Test Report: KVM_Linux_crio 20604

18ead8fd12890e86b803b7c091eba160ddf37424:2025-04-08:39059
Test fail (9/328)

TestAddons/parallel/Ingress (154.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-835623 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-835623 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-835623 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d2d98d15-fee7-4d69-9366-18b8df6b682f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d2d98d15-fee7-4d69-9366-18b8df6b682f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004795076s
I0408 18:16:16.113493  148487 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-835623 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.492388977s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-835623 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.89
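
The failing step is the in-VM curl against the ingress controller (addons_test.go:262), which gave up after just over two minutes; curl exit status 28 generally indicates a request timeout. A minimal sketch for reproducing the check by hand against the same addons-835623 profile, using only the commands recorded above (testdata paths are relative to the minikube source tree):

	kubectl --context addons-835623 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-835623 replace --force -f testdata/nginx-pod-svc.yaml
	kubectl --context addons-835623 wait --for=condition=ready pod -l run=nginx --timeout=90s
	# the step that failed in this run: curl the ingress from inside the VM with the test's Host header
	out/minikube-linux-amd64 -p addons-835623 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# if the curl hangs, inspect the controller directly
	kubectl --context addons-835623 -n ingress-nginx get pods,svc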
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-835623 -n addons-835623
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 logs -n 25: (1.290722986s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-115248                                                                     | download-only-115248 | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC | 08 Apr 25 18:13 UTC |
	| delete  | -p download-only-264168                                                                     | download-only-264168 | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC | 08 Apr 25 18:13 UTC |
	| delete  | -p download-only-115248                                                                     | download-only-115248 | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC | 08 Apr 25 18:13 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-788533 | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC |                     |
	|         | binary-mirror-788533                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33699                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-788533                                                                     | binary-mirror-788533 | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC | 08 Apr 25 18:13 UTC |
	| addons  | disable dashboard -p                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC |                     |
	|         | addons-835623                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC |                     |
	|         | addons-835623                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-835623 --wait=true                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:13 UTC | 08 Apr 25 18:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | -p addons-835623                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-835623 ssh cat                                                                       | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | /opt/local-path-provisioner/pvc-19f0b297-ccd7-4f7e-8774-5015739a28ea_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:16 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-835623 ip                                                                            | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	| addons  | addons-835623 addons disable                                                                | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:15 UTC | 08 Apr 25 18:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC | 08 Apr 25 18:16 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC | 08 Apr 25 18:16 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC | 08 Apr 25 18:16 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-835623 ssh curl -s                                                                   | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC | 08 Apr 25 18:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-835623 addons                                                                        | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:16 UTC | 08 Apr 25 18:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-835623 ip                                                                            | addons-835623        | jenkins | v1.35.0 | 08 Apr 25 18:18 UTC | 08 Apr 25 18:18 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
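	
	The start entry above corresponds to a single invocation along the following lines, reconstructed from the Args column (flag order is not significant):
	
	out/minikube-linux-amd64 start -p addons-835623 --wait=true --memory=4000 --alsologtostderr \
	    --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	    --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	    --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio \
	    --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher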
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 18:13:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:13:02.753248  149099 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:13:02.753556  149099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:13:02.753567  149099 out.go:358] Setting ErrFile to fd 2...
	I0408 18:13:02.753574  149099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:13:02.753816  149099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:13:02.754542  149099 out.go:352] Setting JSON to false
	I0408 18:13:02.755436  149099 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6928,"bootTime":1744129055,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:13:02.755571  149099 start.go:139] virtualization: kvm guest
	I0408 18:13:02.757908  149099 out.go:177] * [addons-835623] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:13:02.759594  149099 notify.go:220] Checking for updates...
	I0408 18:13:02.759624  149099 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 18:13:02.761398  149099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:13:02.763070  149099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:13:02.764692  149099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:13:02.766256  149099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:13:02.767797  149099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:13:02.769415  149099 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 18:13:02.805724  149099 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 18:13:02.807427  149099 start.go:297] selected driver: kvm2
	I0408 18:13:02.807471  149099 start.go:901] validating driver "kvm2" against <nil>
	I0408 18:13:02.807490  149099 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:13:02.808412  149099 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:13:02.808541  149099 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 18:13:02.827173  149099 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 18:13:02.827239  149099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:13:02.827485  149099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 18:13:02.827520  149099 cni.go:84] Creating CNI manager for ""
	I0408 18:13:02.827546  149099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 18:13:02.827555  149099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 18:13:02.827624  149099 start.go:340] cluster config:
	{Name:addons-835623 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-835623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:13:02.827729  149099 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:13:02.830799  149099 out.go:177] * Starting "addons-835623" primary control-plane node in "addons-835623" cluster
	I0408 18:13:02.832195  149099 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 18:13:02.832252  149099 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 18:13:02.832264  149099 cache.go:56] Caching tarball of preloaded images
	I0408 18:13:02.832358  149099 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 18:13:02.832371  149099 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 18:13:02.832680  149099 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/config.json ...
	I0408 18:13:02.832706  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/config.json: {Name:mk46aa255331e43d278b39bb04e894f4aeaa90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:02.832992  149099 start.go:360] acquireMachinesLock for addons-835623: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 18:13:02.833106  149099 start.go:364] duration metric: took 86.904µs to acquireMachinesLock for "addons-835623"
	I0408 18:13:02.833136  149099 start.go:93] Provisioning new machine with config: &{Name:addons-835623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-835623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 18:13:02.833226  149099 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 18:13:02.835069  149099 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 18:13:02.835240  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:02.835283  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:02.851758  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40421
	I0408 18:13:02.852429  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:02.853231  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:02.853259  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:02.853994  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:02.854287  149099 main.go:141] libmachine: (addons-835623) Calling .GetMachineName
	I0408 18:13:02.854505  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:02.854732  149099 start.go:159] libmachine.API.Create for "addons-835623" (driver="kvm2")
	I0408 18:13:02.854767  149099 client.go:168] LocalClient.Create starting
	I0408 18:13:02.854817  149099 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem
	I0408 18:13:03.219995  149099 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem
	I0408 18:13:03.288778  149099 main.go:141] libmachine: Running pre-create checks...
	I0408 18:13:03.288808  149099 main.go:141] libmachine: (addons-835623) Calling .PreCreateCheck
	I0408 18:13:03.289497  149099 main.go:141] libmachine: (addons-835623) Calling .GetConfigRaw
	I0408 18:13:03.290110  149099 main.go:141] libmachine: Creating machine...
	I0408 18:13:03.290130  149099 main.go:141] libmachine: (addons-835623) Calling .Create
	I0408 18:13:03.290397  149099 main.go:141] libmachine: (addons-835623) creating KVM machine...
	I0408 18:13:03.290418  149099 main.go:141] libmachine: (addons-835623) creating network...
	I0408 18:13:03.292154  149099 main.go:141] libmachine: (addons-835623) DBG | found existing default KVM network
	I0408 18:13:03.293218  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:03.293011  149122 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001165d0}
	I0408 18:13:03.293297  149099 main.go:141] libmachine: (addons-835623) DBG | created network xml: 
	I0408 18:13:03.293326  149099 main.go:141] libmachine: (addons-835623) DBG | <network>
	I0408 18:13:03.293341  149099 main.go:141] libmachine: (addons-835623) DBG |   <name>mk-addons-835623</name>
	I0408 18:13:03.293358  149099 main.go:141] libmachine: (addons-835623) DBG |   <dns enable='no'/>
	I0408 18:13:03.293367  149099 main.go:141] libmachine: (addons-835623) DBG |   
	I0408 18:13:03.293378  149099 main.go:141] libmachine: (addons-835623) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 18:13:03.293386  149099 main.go:141] libmachine: (addons-835623) DBG |     <dhcp>
	I0408 18:13:03.293391  149099 main.go:141] libmachine: (addons-835623) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 18:13:03.293396  149099 main.go:141] libmachine: (addons-835623) DBG |     </dhcp>
	I0408 18:13:03.293400  149099 main.go:141] libmachine: (addons-835623) DBG |   </ip>
	I0408 18:13:03.293405  149099 main.go:141] libmachine: (addons-835623) DBG |   
	I0408 18:13:03.293411  149099 main.go:141] libmachine: (addons-835623) DBG | </network>
	I0408 18:13:03.293421  149099 main.go:141] libmachine: (addons-835623) DBG | 
	I0408 18:13:03.300091  149099 main.go:141] libmachine: (addons-835623) DBG | trying to create private KVM network mk-addons-835623 192.168.39.0/24...
	I0408 18:13:03.388175  149099 main.go:141] libmachine: (addons-835623) DBG | private KVM network mk-addons-835623 192.168.39.0/24 created
	I0408 18:13:03.388219  149099 main.go:141] libmachine: (addons-835623) setting up store path in /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623 ...
	I0408 18:13:03.388237  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:03.388135  149122 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:13:03.388247  149099 main.go:141] libmachine: (addons-835623) building disk image from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 18:13:03.388387  149099 main.go:141] libmachine: (addons-835623) Downloading /home/jenkins/minikube-integration/20604-141129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 18:13:03.693693  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:03.693520  149122 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa...
	I0408 18:13:04.082340  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:04.082162  149122 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/addons-835623.rawdisk...
	I0408 18:13:04.082377  149099 main.go:141] libmachine: (addons-835623) DBG | Writing magic tar header
	I0408 18:13:04.082394  149099 main.go:141] libmachine: (addons-835623) DBG | Writing SSH key tar header
	I0408 18:13:04.082405  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:04.082317  149122 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623 ...
	I0408 18:13:04.082432  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623
	I0408 18:13:04.082440  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines
	I0408 18:13:04.082448  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:13:04.082454  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129
	I0408 18:13:04.082464  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 18:13:04.082537  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home/jenkins
	I0408 18:13:04.082558  149099 main.go:141] libmachine: (addons-835623) DBG | checking permissions on dir: /home
	I0408 18:13:04.082566  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623 (perms=drwx------)
	I0408 18:13:04.082583  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines (perms=drwxr-xr-x)
	I0408 18:13:04.082593  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube (perms=drwxr-xr-x)
	I0408 18:13:04.082602  149099 main.go:141] libmachine: (addons-835623) DBG | skipping /home - not owner
	I0408 18:13:04.082616  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins/minikube-integration/20604-141129 (perms=drwxrwxr-x)
	I0408 18:13:04.082628  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 18:13:04.082639  149099 main.go:141] libmachine: (addons-835623) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 18:13:04.082646  149099 main.go:141] libmachine: (addons-835623) creating domain...
	I0408 18:13:04.083800  149099 main.go:141] libmachine: (addons-835623) define libvirt domain using xml: 
	I0408 18:13:04.083840  149099 main.go:141] libmachine: (addons-835623) <domain type='kvm'>
	I0408 18:13:04.083923  149099 main.go:141] libmachine: (addons-835623)   <name>addons-835623</name>
	I0408 18:13:04.083949  149099 main.go:141] libmachine: (addons-835623)   <memory unit='MiB'>4000</memory>
	I0408 18:13:04.083960  149099 main.go:141] libmachine: (addons-835623)   <vcpu>2</vcpu>
	I0408 18:13:04.083974  149099 main.go:141] libmachine: (addons-835623)   <features>
	I0408 18:13:04.083997  149099 main.go:141] libmachine: (addons-835623)     <acpi/>
	I0408 18:13:04.084012  149099 main.go:141] libmachine: (addons-835623)     <apic/>
	I0408 18:13:04.084019  149099 main.go:141] libmachine: (addons-835623)     <pae/>
	I0408 18:13:04.084025  149099 main.go:141] libmachine: (addons-835623)     
	I0408 18:13:04.084030  149099 main.go:141] libmachine: (addons-835623)   </features>
	I0408 18:13:04.084039  149099 main.go:141] libmachine: (addons-835623)   <cpu mode='host-passthrough'>
	I0408 18:13:04.084059  149099 main.go:141] libmachine: (addons-835623)   
	I0408 18:13:04.084066  149099 main.go:141] libmachine: (addons-835623)   </cpu>
	I0408 18:13:04.084108  149099 main.go:141] libmachine: (addons-835623)   <os>
	I0408 18:13:04.084131  149099 main.go:141] libmachine: (addons-835623)     <type>hvm</type>
	I0408 18:13:04.084138  149099 main.go:141] libmachine: (addons-835623)     <boot dev='cdrom'/>
	I0408 18:13:04.084146  149099 main.go:141] libmachine: (addons-835623)     <boot dev='hd'/>
	I0408 18:13:04.084154  149099 main.go:141] libmachine: (addons-835623)     <bootmenu enable='no'/>
	I0408 18:13:04.084159  149099 main.go:141] libmachine: (addons-835623)   </os>
	I0408 18:13:04.084166  149099 main.go:141] libmachine: (addons-835623)   <devices>
	I0408 18:13:04.084176  149099 main.go:141] libmachine: (addons-835623)     <disk type='file' device='cdrom'>
	I0408 18:13:04.084187  149099 main.go:141] libmachine: (addons-835623)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/boot2docker.iso'/>
	I0408 18:13:04.084211  149099 main.go:141] libmachine: (addons-835623)       <target dev='hdc' bus='scsi'/>
	I0408 18:13:04.084216  149099 main.go:141] libmachine: (addons-835623)       <readonly/>
	I0408 18:13:04.084220  149099 main.go:141] libmachine: (addons-835623)     </disk>
	I0408 18:13:04.084227  149099 main.go:141] libmachine: (addons-835623)     <disk type='file' device='disk'>
	I0408 18:13:04.084240  149099 main.go:141] libmachine: (addons-835623)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 18:13:04.084262  149099 main.go:141] libmachine: (addons-835623)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/addons-835623.rawdisk'/>
	I0408 18:13:04.084280  149099 main.go:141] libmachine: (addons-835623)       <target dev='hda' bus='virtio'/>
	I0408 18:13:04.084294  149099 main.go:141] libmachine: (addons-835623)     </disk>
	I0408 18:13:04.084307  149099 main.go:141] libmachine: (addons-835623)     <interface type='network'>
	I0408 18:13:04.084320  149099 main.go:141] libmachine: (addons-835623)       <source network='mk-addons-835623'/>
	I0408 18:13:04.084330  149099 main.go:141] libmachine: (addons-835623)       <model type='virtio'/>
	I0408 18:13:04.084338  149099 main.go:141] libmachine: (addons-835623)     </interface>
	I0408 18:13:04.084348  149099 main.go:141] libmachine: (addons-835623)     <interface type='network'>
	I0408 18:13:04.084365  149099 main.go:141] libmachine: (addons-835623)       <source network='default'/>
	I0408 18:13:04.084380  149099 main.go:141] libmachine: (addons-835623)       <model type='virtio'/>
	I0408 18:13:04.084392  149099 main.go:141] libmachine: (addons-835623)     </interface>
	I0408 18:13:04.084399  149099 main.go:141] libmachine: (addons-835623)     <serial type='pty'>
	I0408 18:13:04.084407  149099 main.go:141] libmachine: (addons-835623)       <target port='0'/>
	I0408 18:13:04.084415  149099 main.go:141] libmachine: (addons-835623)     </serial>
	I0408 18:13:04.084421  149099 main.go:141] libmachine: (addons-835623)     <console type='pty'>
	I0408 18:13:04.084426  149099 main.go:141] libmachine: (addons-835623)       <target type='serial' port='0'/>
	I0408 18:13:04.084434  149099 main.go:141] libmachine: (addons-835623)     </console>
	I0408 18:13:04.084438  149099 main.go:141] libmachine: (addons-835623)     <rng model='virtio'>
	I0408 18:13:04.084453  149099 main.go:141] libmachine: (addons-835623)       <backend model='random'>/dev/random</backend>
	I0408 18:13:04.084462  149099 main.go:141] libmachine: (addons-835623)     </rng>
	I0408 18:13:04.084467  149099 main.go:141] libmachine: (addons-835623)     
	I0408 18:13:04.084477  149099 main.go:141] libmachine: (addons-835623)     
	I0408 18:13:04.084482  149099 main.go:141] libmachine: (addons-835623)   </devices>
	I0408 18:13:04.084492  149099 main.go:141] libmachine: (addons-835623) </domain>
	I0408 18:13:04.084508  149099 main.go:141] libmachine: (addons-835623) 
	I0408 18:13:04.091603  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:0d:10:b5 in network default
	I0408 18:13:04.092397  149099 main.go:141] libmachine: (addons-835623) starting domain...
	I0408 18:13:04.092425  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:04.092431  149099 main.go:141] libmachine: (addons-835623) ensuring networks are active...
	I0408 18:13:04.093260  149099 main.go:141] libmachine: (addons-835623) Ensuring network default is active
	I0408 18:13:04.093566  149099 main.go:141] libmachine: (addons-835623) Ensuring network mk-addons-835623 is active
	I0408 18:13:04.094268  149099 main.go:141] libmachine: (addons-835623) getting domain XML...
	I0408 18:13:04.095067  149099 main.go:141] libmachine: (addons-835623) creating domain...
	I0408 18:13:05.660166  149099 main.go:141] libmachine: (addons-835623) waiting for IP...
	I0408 18:13:05.661164  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:05.661912  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:05.662049  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:05.661907  149122 retry.go:31] will retry after 251.445185ms: waiting for domain to come up
	I0408 18:13:05.915853  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:05.916306  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:05.916376  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:05.916303  149122 retry.go:31] will retry after 359.006339ms: waiting for domain to come up
	I0408 18:13:06.277204  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:06.277901  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:06.277928  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:06.277869  149122 retry.go:31] will retry after 450.554645ms: waiting for domain to come up
	I0408 18:13:06.730552  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:06.730922  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:06.730979  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:06.730917  149122 retry.go:31] will retry after 389.114287ms: waiting for domain to come up
	I0408 18:13:07.121639  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:07.122239  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:07.122261  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:07.122191  149122 retry.go:31] will retry after 574.034862ms: waiting for domain to come up
	I0408 18:13:07.697856  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:07.698403  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:07.698432  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:07.698372  149122 retry.go:31] will retry after 754.880472ms: waiting for domain to come up
	I0408 18:13:08.456492  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:08.457272  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:08.457332  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:08.457247  149122 retry.go:31] will retry after 1.132012211s: waiting for domain to come up
	I0408 18:13:09.590925  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:09.591506  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:09.591562  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:09.591465  149122 retry.go:31] will retry after 988.473798ms: waiting for domain to come up
	I0408 18:13:10.582035  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:10.582876  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:10.582936  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:10.582846  149122 retry.go:31] will retry after 1.361084784s: waiting for domain to come up
	I0408 18:13:11.946649  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:11.947684  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:11.947745  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:11.947590  149122 retry.go:31] will retry after 1.643154191s: waiting for domain to come up
	I0408 18:13:13.592512  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:13.593308  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:13.593379  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:13.593249  149122 retry.go:31] will retry after 2.417328542s: waiting for domain to come up
	I0408 18:13:16.014053  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:16.014427  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:16.014455  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:16.014394  149122 retry.go:31] will retry after 3.199818343s: waiting for domain to come up
	I0408 18:13:19.215648  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:19.216029  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:19.216065  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:19.216008  149122 retry.go:31] will retry after 3.46611702s: waiting for domain to come up
	I0408 18:13:22.686666  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:22.686969  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find current IP address of domain addons-835623 in network mk-addons-835623
	I0408 18:13:22.687012  149099 main.go:141] libmachine: (addons-835623) DBG | I0408 18:13:22.686969  149122 retry.go:31] will retry after 4.411410636s: waiting for domain to come up
	I0408 18:13:27.103755  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:27.104293  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has current primary IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:27.104314  149099 main.go:141] libmachine: (addons-835623) found domain IP: 192.168.39.89
	I0408 18:13:27.104330  149099 main.go:141] libmachine: (addons-835623) reserving static IP address...
	I0408 18:13:27.104662  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find host DHCP lease matching {name: "addons-835623", mac: "52:54:00:ed:af:33", ip: "192.168.39.89"} in network mk-addons-835623
	I0408 18:13:27.227699  149099 main.go:141] libmachine: (addons-835623) DBG | Getting to WaitForSSH function...
	I0408 18:13:27.227735  149099 main.go:141] libmachine: (addons-835623) reserved static IP address 192.168.39.89 for domain addons-835623
	I0408 18:13:27.227750  149099 main.go:141] libmachine: (addons-835623) waiting for SSH...
	I0408 18:13:27.229968  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:27.230315  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623
	I0408 18:13:27.230343  149099 main.go:141] libmachine: (addons-835623) DBG | unable to find defined IP address of network mk-addons-835623 interface with MAC address 52:54:00:ed:af:33
	I0408 18:13:27.230550  149099 main.go:141] libmachine: (addons-835623) DBG | Using SSH client type: external
	I0408 18:13:27.230577  149099 main.go:141] libmachine: (addons-835623) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa (-rw-------)
	I0408 18:13:27.230616  149099 main.go:141] libmachine: (addons-835623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 18:13:27.230636  149099 main.go:141] libmachine: (addons-835623) DBG | About to run SSH command:
	I0408 18:13:27.230653  149099 main.go:141] libmachine: (addons-835623) DBG | exit 0
	I0408 18:13:27.235063  149099 main.go:141] libmachine: (addons-835623) DBG | SSH cmd err, output: exit status 255: 
	I0408 18:13:27.235091  149099 main.go:141] libmachine: (addons-835623) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0408 18:13:27.235098  149099 main.go:141] libmachine: (addons-835623) DBG | command : exit 0
	I0408 18:13:27.235103  149099 main.go:141] libmachine: (addons-835623) DBG | err     : exit status 255
	I0408 18:13:27.235138  149099 main.go:141] libmachine: (addons-835623) DBG | output  : 
	I0408 18:13:30.235331  149099 main.go:141] libmachine: (addons-835623) DBG | Getting to WaitForSSH function...
	I0408 18:13:30.238120  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.238676  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.238702  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.238877  149099 main.go:141] libmachine: (addons-835623) DBG | Using SSH client type: external
	I0408 18:13:30.238910  149099 main.go:141] libmachine: (addons-835623) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa (-rw-------)
	I0408 18:13:30.238930  149099 main.go:141] libmachine: (addons-835623) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 18:13:30.238969  149099 main.go:141] libmachine: (addons-835623) DBG | About to run SSH command:
	I0408 18:13:30.238982  149099 main.go:141] libmachine: (addons-835623) DBG | exit 0
	I0408 18:13:30.366033  149099 main.go:141] libmachine: (addons-835623) DBG | SSH cmd err, output: <nil>: 
	I0408 18:13:30.366283  149099 main.go:141] libmachine: (addons-835623) KVM machine creation complete
	I0408 18:13:30.366633  149099 main.go:141] libmachine: (addons-835623) Calling .GetConfigRaw
	I0408 18:13:30.367283  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:30.367504  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:30.367665  149099 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 18:13:30.367679  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:30.368976  149099 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 18:13:30.368995  149099 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 18:13:30.369002  149099 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 18:13:30.369030  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:30.371403  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.371719  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.371746  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.371899  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:30.372155  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.372320  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.372469  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:30.372624  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:30.372859  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:30.372868  149099 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 18:13:30.485554  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 18:13:30.485577  149099 main.go:141] libmachine: Detecting the provisioner...
	I0408 18:13:30.485586  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:30.488554  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.488870  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.488902  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.489058  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:30.489364  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.489568  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.489770  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:30.489996  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:30.490274  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:30.490288  149099 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 18:13:30.606872  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 18:13:30.606986  149099 main.go:141] libmachine: found compatible host: buildroot
	I0408 18:13:30.607002  149099 main.go:141] libmachine: Provisioning with buildroot...
	I0408 18:13:30.607010  149099 main.go:141] libmachine: (addons-835623) Calling .GetMachineName
	I0408 18:13:30.607264  149099 buildroot.go:166] provisioning hostname "addons-835623"
	I0408 18:13:30.607290  149099 main.go:141] libmachine: (addons-835623) Calling .GetMachineName
	I0408 18:13:30.607444  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:30.610365  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.610721  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.610748  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.610988  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:30.611255  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.611468  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.611668  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:30.611846  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:30.612065  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:30.612078  149099 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-835623 && echo "addons-835623" | sudo tee /etc/hostname
	I0408 18:13:30.739206  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-835623
	
	I0408 18:13:30.739248  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:30.741864  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.742199  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.742233  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.742399  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:30.742614  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.742775  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:30.742916  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:30.743108  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:30.743370  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:30.743398  149099 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-835623' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-835623/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-835623' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 18:13:30.862654  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 18:13:30.862706  149099 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 18:13:30.862736  149099 buildroot.go:174] setting up certificates
	I0408 18:13:30.862784  149099 provision.go:84] configureAuth start
	I0408 18:13:30.862802  149099 main.go:141] libmachine: (addons-835623) Calling .GetMachineName
	I0408 18:13:30.863166  149099 main.go:141] libmachine: (addons-835623) Calling .GetIP
	I0408 18:13:30.865784  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.866245  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.866273  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.866405  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:30.868923  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.869261  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:30.869290  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:30.869404  149099 provision.go:143] copyHostCerts
	I0408 18:13:30.869489  149099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 18:13:30.869659  149099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 18:13:30.869758  149099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 18:13:30.869851  149099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.addons-835623 san=[127.0.0.1 192.168.39.89 addons-835623 localhost minikube]
	I0408 18:13:31.094642  149099 provision.go:177] copyRemoteCerts
	I0408 18:13:31.094749  149099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 18:13:31.094804  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.097952  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.098488  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.098516  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.098776  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.099066  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.099251  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.099436  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:31.188759  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 18:13:31.214462  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 18:13:31.239445  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 18:13:31.262691  149099 provision.go:87] duration metric: took 399.887288ms to configureAuth
	I0408 18:13:31.262725  149099 buildroot.go:189] setting minikube options for container-runtime
	I0408 18:13:31.262937  149099 config.go:182] Loaded profile config "addons-835623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:13:31.263037  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.265729  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.266245  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.266277  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.266451  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.266658  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.266824  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.266953  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.267149  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:31.267457  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:31.267478  149099 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 18:13:31.499697  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 18:13:31.499732  149099 main.go:141] libmachine: Checking connection to Docker...
	I0408 18:13:31.499745  149099 main.go:141] libmachine: (addons-835623) Calling .GetURL
	I0408 18:13:31.501346  149099 main.go:141] libmachine: (addons-835623) DBG | using libvirt version 6000000
	I0408 18:13:31.504153  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.504493  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.504517  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.504734  149099 main.go:141] libmachine: Docker is up and running!
	I0408 18:13:31.504756  149099 main.go:141] libmachine: Reticulating splines...
	I0408 18:13:31.504764  149099 client.go:171] duration metric: took 28.649988487s to LocalClient.Create
	I0408 18:13:31.504788  149099 start.go:167] duration metric: took 28.650060124s to libmachine.API.Create "addons-835623"
	I0408 18:13:31.504800  149099 start.go:293] postStartSetup for "addons-835623" (driver="kvm2")
	I0408 18:13:31.504808  149099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 18:13:31.504827  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:31.505110  149099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 18:13:31.505139  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.507487  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.507769  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.507796  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.507938  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.508147  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.508306  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.508447  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:31.596873  149099 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 18:13:31.601349  149099 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 18:13:31.601389  149099 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 18:13:31.601475  149099 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 18:13:31.601500  149099 start.go:296] duration metric: took 96.694618ms for postStartSetup
	I0408 18:13:31.601540  149099 main.go:141] libmachine: (addons-835623) Calling .GetConfigRaw
	I0408 18:13:31.602139  149099 main.go:141] libmachine: (addons-835623) Calling .GetIP
	I0408 18:13:31.605658  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.606128  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.606159  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.606462  149099 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/config.json ...
	I0408 18:13:31.606743  149099 start.go:128] duration metric: took 28.773503593s to createHost
	I0408 18:13:31.606778  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.609405  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.609734  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.609757  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.610108  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.610363  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.610551  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.610721  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.610895  149099 main.go:141] libmachine: Using SSH client type: native
	I0408 18:13:31.611115  149099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0408 18:13:31.611125  149099 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 18:13:31.726561  149099 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744136011.704952755
	
	I0408 18:13:31.726590  149099 fix.go:216] guest clock: 1744136011.704952755
	I0408 18:13:31.726601  149099 fix.go:229] Guest: 2025-04-08 18:13:31.704952755 +0000 UTC Remote: 2025-04-08 18:13:31.606761753 +0000 UTC m=+28.892733701 (delta=98.191002ms)
	I0408 18:13:31.726634  149099 fix.go:200] guest clock delta is within tolerance: 98.191002ms
	I0408 18:13:31.726642  149099 start.go:83] releasing machines lock for "addons-835623", held for 28.893520316s
	I0408 18:13:31.726671  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:31.726999  149099 main.go:141] libmachine: (addons-835623) Calling .GetIP
	I0408 18:13:31.730115  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.730581  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.730614  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.730828  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:31.731344  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:31.731518  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:31.731596  149099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 18:13:31.731684  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.731705  149099 ssh_runner.go:195] Run: cat /version.json
	I0408 18:13:31.731741  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:31.734493  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.734564  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.734830  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.734900  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:31.734930  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.734952  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:31.735176  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.735302  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:31.735382  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.735479  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:31.735553  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.735612  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:31.735680  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:31.735711  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:31.838290  149099 ssh_runner.go:195] Run: systemctl --version
	I0408 18:13:31.844835  149099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 18:13:32.001066  149099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 18:13:32.007127  149099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 18:13:32.007205  149099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 18:13:32.023060  149099 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 18:13:32.023089  149099 start.go:495] detecting cgroup driver to use...
	I0408 18:13:32.023181  149099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 18:13:32.039246  149099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 18:13:32.053360  149099 docker.go:217] disabling cri-docker service (if available) ...
	I0408 18:13:32.053435  149099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 18:13:32.066880  149099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 18:13:32.080278  149099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 18:13:32.194010  149099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 18:13:32.348620  149099 docker.go:233] disabling docker service ...
	I0408 18:13:32.348697  149099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 18:13:32.364404  149099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 18:13:32.377258  149099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 18:13:32.514509  149099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 18:13:32.650221  149099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 18:13:32.664575  149099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 18:13:32.683576  149099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 18:13:32.683717  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.694197  149099 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 18:13:32.694273  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.704415  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.715013  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.725421  149099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 18:13:32.735747  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.745846  149099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.762581  149099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 18:13:32.772556  149099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 18:13:32.781704  149099 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 18:13:32.781771  149099 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 18:13:32.794982  149099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 18:13:32.804300  149099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:13:32.917812  149099 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 18:13:33.004959  149099 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 18:13:33.005083  149099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 18:13:33.009480  149099 start.go:563] Will wait 60s for crictl version
	I0408 18:13:33.009546  149099 ssh_runner.go:195] Run: which crictl
	I0408 18:13:33.013066  149099 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 18:13:33.049899  149099 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 18:13:33.050054  149099 ssh_runner.go:195] Run: crio --version
	I0408 18:13:33.077868  149099 ssh_runner.go:195] Run: crio --version
	I0408 18:13:33.107772  149099 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 18:13:33.109772  149099 main.go:141] libmachine: (addons-835623) Calling .GetIP
	I0408 18:13:33.112506  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:33.112904  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:33.112946  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:33.113154  149099 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 18:13:33.117409  149099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:13:33.130601  149099 kubeadm.go:883] updating cluster {Name:addons-835623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-835623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 18:13:33.130747  149099 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 18:13:33.130802  149099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:13:33.161989  149099 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 18:13:33.162072  149099 ssh_runner.go:195] Run: which lz4
	I0408 18:13:33.165810  149099 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 18:13:33.169743  149099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 18:13:33.169778  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 18:13:34.449734  149099 crio.go:462] duration metric: took 1.28395166s to copy over tarball
	I0408 18:13:34.449815  149099 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 18:13:36.716550  149099 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.266703985s)
	I0408 18:13:36.716583  149099 crio.go:469] duration metric: took 2.266818537s to extract the tarball
	I0408 18:13:36.716595  149099 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 18:13:36.753729  149099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 18:13:36.796459  149099 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 18:13:36.796487  149099 cache_images.go:84] Images are preloaded, skipping loading
	I0408 18:13:36.796496  149099 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.32.2 crio true true} ...
	I0408 18:13:36.796598  149099 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-835623 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-835623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 18:13:36.796663  149099 ssh_runner.go:195] Run: crio config
	I0408 18:13:36.844470  149099 cni.go:84] Creating CNI manager for ""
	I0408 18:13:36.844494  149099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 18:13:36.844505  149099 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 18:13:36.844526  149099 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-835623 NodeName:addons-835623 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 18:13:36.844704  149099 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-835623"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 18:13:36.844794  149099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 18:13:36.854846  149099 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 18:13:36.854918  149099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 18:13:36.864396  149099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0408 18:13:36.881299  149099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 18:13:36.897283  149099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0408 18:13:36.913795  149099 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0408 18:13:36.917493  149099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 18:13:36.929434  149099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:13:37.045350  149099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:13:37.061685  149099 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623 for IP: 192.168.39.89
	I0408 18:13:37.061714  149099 certs.go:194] generating shared ca certs ...
	I0408 18:13:37.061738  149099 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.061939  149099 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 18:13:37.450801  149099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt ...
	I0408 18:13:37.450837  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt: {Name:mk7dc9638cd483a31512f4dfcd6024a0e52497e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.451021  149099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key ...
	I0408 18:13:37.451033  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key: {Name:mkdd62627505be663192cb4693d31df365a63718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.451109  149099 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 18:13:37.808252  149099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt ...
	I0408 18:13:37.808283  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt: {Name:mk770c0f1881593a42f523821e071ab47b691c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.808450  149099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key ...
	I0408 18:13:37.808462  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key: {Name:mk3603ed6473129f8ba983ce6a4b627df7b6bd3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.808530  149099 certs.go:256] generating profile certs ...
	I0408 18:13:37.808583  149099 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.key
	I0408 18:13:37.808611  149099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt with IP's: []
	I0408 18:13:37.948803  149099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt ...
	I0408 18:13:37.948839  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: {Name:mk53021b6e55924d178b89972fde63c5b4c09521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.949001  149099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.key ...
	I0408 18:13:37.949012  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.key: {Name:mk2786a185925ddfbecb91c0ea2ddf0bfb376746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:37.949081  149099 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key.401eac40
	I0408 18:13:37.949097  149099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt.401eac40 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0408 18:13:38.060812  149099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt.401eac40 ...
	I0408 18:13:38.060849  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt.401eac40: {Name:mk8f3db9c2acf46b015c4b0c3fcde81fb5975148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:38.061046  149099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key.401eac40 ...
	I0408 18:13:38.061070  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key.401eac40: {Name:mkf415efda6076010fce92e1814833a70f0d167b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:38.061171  149099 certs.go:381] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt.401eac40 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt
	I0408 18:13:38.061251  149099 certs.go:385] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key.401eac40 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key
	I0408 18:13:38.061296  149099 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.key
	I0408 18:13:38.061314  149099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.crt with IP's: []
	I0408 18:13:38.118152  149099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.crt ...
	I0408 18:13:38.118185  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.crt: {Name:mk288b7db06082b5fd4c516f58932bd559d79ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:38.119081  149099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.key ...
	I0408 18:13:38.119101  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.key: {Name:mk51e9595556bdcd0ee690df6f33cb48cd9cb531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:38.119288  149099 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 18:13:38.119323  149099 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 18:13:38.119346  149099 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 18:13:38.119369  149099 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 18:13:38.120054  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 18:13:38.152553  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 18:13:38.178346  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 18:13:38.202607  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 18:13:38.228291  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 18:13:38.254491  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 18:13:38.279524  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 18:13:38.305947  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 18:13:38.332009  149099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 18:13:38.355874  149099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 18:13:38.372767  149099 ssh_runner.go:195] Run: openssl version
	I0408 18:13:38.379376  149099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 18:13:38.391778  149099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:13:38.397033  149099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:13:38.397103  149099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 18:13:38.403711  149099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 18:13:38.416749  149099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 18:13:38.421589  149099 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 18:13:38.421653  149099 kubeadm.go:392] StartCluster: {Name:addons-835623 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-835623 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:13:38.421744  149099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 18:13:38.421899  149099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 18:13:38.461951  149099 cri.go:89] found id: ""
	I0408 18:13:38.462054  149099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 18:13:38.473575  149099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 18:13:38.484616  149099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 18:13:38.495649  149099 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 18:13:38.495672  149099 kubeadm.go:157] found existing configuration files:
	
	I0408 18:13:38.495732  149099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 18:13:38.505863  149099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 18:13:38.505951  149099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 18:13:38.516459  149099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 18:13:38.526075  149099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 18:13:38.526147  149099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 18:13:38.536566  149099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 18:13:38.545910  149099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 18:13:38.545981  149099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 18:13:38.555940  149099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 18:13:38.565253  149099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 18:13:38.565327  149099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 18:13:38.575264  149099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 18:13:38.624139  149099 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0408 18:13:38.624250  149099 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 18:13:38.731610  149099 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 18:13:38.731743  149099 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 18:13:38.731892  149099 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 18:13:38.740994  149099 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 18:13:38.894625  149099 out.go:235]   - Generating certificates and keys ...
	I0408 18:13:38.894795  149099 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 18:13:38.894885  149099 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 18:13:38.895033  149099 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 18:13:38.994838  149099 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 18:13:39.194144  149099 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 18:13:39.300144  149099 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 18:13:39.490814  149099 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 18:13:39.490966  149099 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-835623 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0408 18:13:39.593644  149099 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 18:13:39.594007  149099 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-835623 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0408 18:13:39.790683  149099 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 18:13:39.966672  149099 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 18:13:40.240789  149099 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 18:13:40.240888  149099 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 18:13:40.504979  149099 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 18:13:40.590752  149099 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 18:13:40.914030  149099 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 18:13:40.991059  149099 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 18:13:41.077287  149099 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 18:13:41.077969  149099 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 18:13:41.080478  149099 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 18:13:41.082720  149099 out.go:235]   - Booting up control plane ...
	I0408 18:13:41.082857  149099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 18:13:41.082968  149099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 18:13:41.083079  149099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 18:13:41.099292  149099 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 18:13:41.107447  149099 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 18:13:41.107532  149099 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 18:13:41.251462  149099 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0408 18:13:41.251643  149099 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0408 18:13:41.758498  149099 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.653986ms
	I0408 18:13:41.758631  149099 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0408 18:13:46.758145  149099 kubeadm.go:310] [api-check] The API server is healthy after 5.00201489s
	I0408 18:13:46.772390  149099 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 18:13:46.790319  149099 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 18:13:46.827204  149099 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 18:13:46.827392  149099 kubeadm.go:310] [mark-control-plane] Marking the node addons-835623 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 18:13:46.840311  149099 kubeadm.go:310] [bootstrap-token] Using token: 4z9u63.szg9b1noch3kln6f
	I0408 18:13:46.842133  149099 out.go:235]   - Configuring RBAC rules ...
	I0408 18:13:46.842289  149099 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 18:13:46.848452  149099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 18:13:46.856223  149099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 18:13:46.860843  149099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 18:13:46.870183  149099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 18:13:46.874868  149099 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 18:13:47.166025  149099 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 18:13:47.603073  149099 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0408 18:13:48.165155  149099 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0408 18:13:48.166097  149099 kubeadm.go:310] 
	I0408 18:13:48.166192  149099 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0408 18:13:48.166205  149099 kubeadm.go:310] 
	I0408 18:13:48.166342  149099 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0408 18:13:48.166364  149099 kubeadm.go:310] 
	I0408 18:13:48.166401  149099 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0408 18:13:48.166500  149099 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 18:13:48.166593  149099 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 18:13:48.166613  149099 kubeadm.go:310] 
	I0408 18:13:48.166688  149099 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0408 18:13:48.166699  149099 kubeadm.go:310] 
	I0408 18:13:48.166762  149099 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 18:13:48.166772  149099 kubeadm.go:310] 
	I0408 18:13:48.166834  149099 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0408 18:13:48.166940  149099 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 18:13:48.167032  149099 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 18:13:48.167043  149099 kubeadm.go:310] 
	I0408 18:13:48.167173  149099 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 18:13:48.167294  149099 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0408 18:13:48.167320  149099 kubeadm.go:310] 
	I0408 18:13:48.167398  149099 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4z9u63.szg9b1noch3kln6f \
	I0408 18:13:48.167487  149099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1949abeebf79f66b11a51c3c10efedc1348ce8b048adc726b3b33c6e58c53853 \
	I0408 18:13:48.167510  149099 kubeadm.go:310] 	--control-plane 
	I0408 18:13:48.167516  149099 kubeadm.go:310] 
	I0408 18:13:48.167588  149099 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0408 18:13:48.167595  149099 kubeadm.go:310] 
	I0408 18:13:48.167692  149099 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4z9u63.szg9b1noch3kln6f \
	I0408 18:13:48.167818  149099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1949abeebf79f66b11a51c3c10efedc1348ce8b048adc726b3b33c6e58c53853 
	I0408 18:13:48.168914  149099 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 18:13:48.169009  149099 cni.go:84] Creating CNI manager for ""
	I0408 18:13:48.169028  149099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 18:13:48.170760  149099 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 18:13:48.172227  149099 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 18:13:48.182815  149099 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 18:13:48.200993  149099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 18:13:48.201119  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:48.201185  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-835623 minikube.k8s.io/updated_at=2025_04_08T18_13_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=00fec7ad00298ce3ccd71a2d57a7f829f082fec8 minikube.k8s.io/name=addons-835623 minikube.k8s.io/primary=true
	I0408 18:13:48.230236  149099 ops.go:34] apiserver oom_adj: -16
	I0408 18:13:48.317073  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:48.817274  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:49.317966  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:49.817186  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:50.317251  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:50.817695  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:51.317968  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:51.817995  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:52.317796  149099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 18:13:52.402565  149099 kubeadm.go:1113] duration metric: took 4.201552037s to wait for elevateKubeSystemPrivileges
	I0408 18:13:52.402607  149099 kubeadm.go:394] duration metric: took 13.98096054s to StartCluster
	I0408 18:13:52.402628  149099 settings.go:142] acquiring lock: {Name:mk8d530f6b8ad949177759460b330a3d74710125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:52.402756  149099 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:13:52.403108  149099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:52.403303  149099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 18:13:52.403329  149099 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 18:13:52.403402  149099 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0408 18:13:52.403512  149099 addons.go:69] Setting yakd=true in profile "addons-835623"
	I0408 18:13:52.403524  149099 addons.go:69] Setting ingress-dns=true in profile "addons-835623"
	I0408 18:13:52.403538  149099 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-835623"
	I0408 18:13:52.403556  149099 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-835623"
	I0408 18:13:52.403549  149099 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-835623"
	I0408 18:13:52.403570  149099 addons.go:69] Setting inspektor-gadget=true in profile "addons-835623"
	I0408 18:13:52.403579  149099 addons.go:69] Setting default-storageclass=true in profile "addons-835623"
	I0408 18:13:52.403576  149099 addons.go:69] Setting gcp-auth=true in profile "addons-835623"
	I0408 18:13:52.403593  149099 addons.go:238] Setting addon inspektor-gadget=true in "addons-835623"
	I0408 18:13:52.403596  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403586  149099 addons.go:69] Setting cloud-spanner=true in profile "addons-835623"
	I0408 18:13:52.403607  149099 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-835623"
	I0408 18:13:52.403618  149099 addons.go:69] Setting volcano=true in profile "addons-835623"
	I0408 18:13:52.403623  149099 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-835623"
	I0408 18:13:52.403627  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403632  149099 addons.go:69] Setting registry=true in profile "addons-835623"
	I0408 18:13:52.403640  149099 addons.go:238] Setting addon volcano=true in "addons-835623"
	I0408 18:13:52.403642  149099 addons.go:69] Setting volumesnapshots=true in profile "addons-835623"
	I0408 18:13:52.403649  149099 addons.go:238] Setting addon registry=true in "addons-835623"
	I0408 18:13:52.403652  149099 addons.go:238] Setting addon volumesnapshots=true in "addons-835623"
	I0408 18:13:52.403672  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403681  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403556  149099 addons.go:238] Setting addon ingress-dns=true in "addons-835623"
	I0408 18:13:52.403768  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403623  149099 addons.go:69] Setting metrics-server=true in profile "addons-835623"
	I0408 18:13:52.403829  149099 addons.go:238] Setting addon metrics-server=true in "addons-835623"
	I0408 18:13:52.403850  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.403636  149099 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-835623"
	I0408 18:13:52.403565  149099 addons.go:238] Setting addon yakd=true in "addons-835623"
	I0408 18:13:52.404065  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404069  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.404077  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404077  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404093  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404116  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404124  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404151  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404064  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.403673  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.404195  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404206  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404221  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404248  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404266  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404283  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.403611  149099 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-835623"
	I0408 18:13:52.403614  149099 addons.go:238] Setting addon cloud-spanner=true in "addons-835623"
	I0408 18:13:52.403618  149099 addons.go:69] Setting storage-provisioner=true in profile "addons-835623"
	I0408 18:13:52.404371  149099 addons.go:238] Setting addon storage-provisioner=true in "addons-835623"
	I0408 18:13:52.403610  149099 mustload.go:65] Loading cluster: addons-835623
	I0408 18:13:52.403562  149099 addons.go:69] Setting ingress=true in profile "addons-835623"
	I0408 18:13:52.403528  149099 config.go:182] Loaded profile config "addons-835623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:13:52.404392  149099 addons.go:238] Setting addon ingress=true in "addons-835623"
	I0408 18:13:52.403628  149099 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-835623"
	I0408 18:13:52.404500  149099 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-835623"
	I0408 18:13:52.404548  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.404570  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.404615  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.404681  149099 config.go:182] Loaded profile config "addons-835623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:13:52.404867  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404890  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.404994  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.405004  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.405016  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405043  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.404595  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.405058  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405079  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405017  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405102  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.405124  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405218  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.405032  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.405323  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.405509  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.405896  149099 out.go:177] * Verifying Kubernetes components...
	I0408 18:13:52.406387  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.406415  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.410137  149099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 18:13:52.430012  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I0408 18:13:52.430052  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0408 18:13:52.430025  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38073
	I0408 18:13:52.434897  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.434969  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.442373  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
	I0408 18:13:52.442372  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0408 18:13:52.443610  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.443630  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.443697  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.443712  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.443745  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.444298  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.444321  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.444450  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.444460  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.444565  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.444575  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.444701  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.444712  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.444763  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.444858  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.444867  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.445315  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.445327  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.445334  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.445345  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.445494  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.445897  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.446141  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.446166  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.446183  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.446789  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.446828  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.446855  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.446833  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.478052  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0408 18:13:52.478357  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0408 18:13:52.479624  149099 addons.go:238] Setting addon default-storageclass=true in "addons-835623"
	I0408 18:13:52.479669  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.480199  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.480262  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.481389  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0408 18:13:52.483880  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.483892  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I0408 18:13:52.484085  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.484409  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.484516  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.484731  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.484756  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.485279  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.485473  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.485488  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.485653  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.485665  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.486017  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.486097  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.486115  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.486605  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.486635  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.486966  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.487032  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.487350  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.487594  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.488678  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.489628  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0408 18:13:52.490559  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.490847  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.491229  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.491254  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.491950  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.491973  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.492087  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0408 18:13:52.492224  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0408 18:13:52.493211  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.493315  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.493434  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.494078  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.494226  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35395
	I0408 18:13:52.494268  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.494289  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.494400  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.494415  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.494686  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.494824  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.495045  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.495524  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.495548  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.495986  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.496804  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.496857  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.497126  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0408 18:13:52.497803  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0408 18:13:52.498306  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.498686  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.499530  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.499593  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.499801  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.499816  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.500325  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.500612  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.500765  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0408 18:13:52.522550  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.522590  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I0408 18:13:52.523267  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0408 18:13:52.523278  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.523168  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.523351  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0408 18:13:52.523394  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.523501  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.523560  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.523648  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0408 18:13:52.523720  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.523735  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.523847  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.523946  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.523999  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.524162  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.524300  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.525408  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.524375  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.525483  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.524392  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.525519  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.524409  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0408 18:13:52.524418  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0408 18:13:52.524504  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.524845  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.525882  149099 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0408 18:13:52.526019  149099 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0408 18:13:52.526437  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.526458  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.526974  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.527014  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.527054  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.527094  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.527264  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.527701  149099 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 18:13:52.527723  149099 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0408 18:13:52.527747  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.527807  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.527824  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.527832  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.527883  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.527886  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.527901  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.528201  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.528247  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.528295  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.528720  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.528966  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.529014  149099 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 18:13:52.529024  149099 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 18:13:52.529039  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.529180  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.529244  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.529446  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.529466  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.529477  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.529784  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.529805  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.529909  149099 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0408 18:13:52.530165  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.530181  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.530637  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.531479  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.531520  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.531941  149099 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0408 18:13:52.531961  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0408 18:13:52.531983  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.537990  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.538149  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.538269  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.538326  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.538347  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.538430  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.538500  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.538545  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.538864  149099 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-835623"
	I0408 18:13:52.538917  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:52.538973  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.539008  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.539349  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.539406  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.539886  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.539994  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.540038  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.540102  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.540167  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.540173  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.540188  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.540207  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.540230  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.540320  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.540444  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.540897  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.541096  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.541226  149099 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0408 18:13:52.541241  149099 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0408 18:13:52.542959  149099 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 18:13:52.542960  149099 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:13:52.543088  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 18:13:52.543115  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.544598  149099 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 18:13:52.544619  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0408 18:13:52.544646  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.545683  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0408 18:13:52.547171  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.547314  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I0408 18:13:52.547701  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0408 18:13:52.548012  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.548042  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.548126  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.548707  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.548775  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.549233  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.549400  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.549413  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.549511  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.549556  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.549568  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.549801  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.549989  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.550048  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.550598  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.550718  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.550797  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.550816  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.550850  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I0408 18:13:52.551397  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.551692  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.552292  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.552677  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.552801  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.553617  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.553860  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.553918  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.554089  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.554152  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.554556  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.554770  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.555268  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.555333  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.555424  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 18:13:52.555653  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.555784  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.556172  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.556820  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 18:13:52.556851  149099 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 18:13:52.556881  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.556826  149099 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 18:13:52.557642  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 18:13:52.558566  149099 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:13:52.558585  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 18:13:52.558606  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.560735  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 18:13:52.561604  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.562212  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.562243  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.563103  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.563628  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.563658  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.563702  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.564002  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 18:13:52.564279  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.564328  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.564529  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.564580  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.564818  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.564882  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.565324  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.567239  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.567329  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 18:13:52.568695  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 18:13:52.569738  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0408 18:13:52.570334  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46667
	I0408 18:13:52.570505  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.570727  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.570969  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.570989  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.571209  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.571232  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.571390  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.571475  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 18:13:52.571601  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.571656  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.572375  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.573893  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.574317  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.574820  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 18:13:52.575868  149099 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0408 18:13:52.576202  149099 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0408 18:13:52.577499  149099 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:13:52.577517  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 18:13:52.577540  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.577634  149099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 18:13:52.577731  149099 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 18:13:52.577742  149099 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 18:13:52.577760  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.579371  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 18:13:52.579397  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 18:13:52.579421  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.582900  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.582969  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0408 18:13:52.583419  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.583665  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.583688  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.584356  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.584400  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.584414  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.584550  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.584663  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.584923  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.585140  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.585391  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.586037  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.586057  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.586130  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.586143  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.586251  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.586288  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.586570  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.586641  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39907
	I0408 18:13:52.586684  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.586870  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.586955  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0408 18:13:52.587045  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.587067  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.587557  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.587639  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.587641  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.588012  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.588031  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.588386  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.588419  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.588484  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.588982  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.588998  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.589137  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.589273  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0408 18:13:52.589475  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.589681  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.589874  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.590621  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.591215  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.591239  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.592020  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.592048  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.592248  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.592292  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:52.592304  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:52.592329  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.592560  149099 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 18:13:52.592576  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:52.592579  149099 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 18:13:52.592592  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:52.592602  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:52.592610  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:52.592610  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.592616  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:52.592952  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:52.592980  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:52.592990  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	W0408 18:13:52.593090  149099 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0408 18:13:52.593776  149099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0408 18:13:52.594886  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0408 18:13:52.594934  149099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 18:13:52.595890  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.596003  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.596558  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.596732  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.596745  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.597028  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.597057  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.597258  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.597324  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.597380  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.597762  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:52.597794  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:52.597943  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.597994  149099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 18:13:52.598027  149099 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0408 18:13:52.598192  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.599430  149099 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0408 18:13:52.599450  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 18:13:52.599476  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.599500  149099 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:13:52.599510  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 18:13:52.599524  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.603140  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.603591  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.603613  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.603632  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.603801  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.603997  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.604132  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.604182  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.604195  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.604372  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.604372  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:52.604545  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.604695  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.604847  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	W0408 18:13:52.614760  149099 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33198->192.168.39.89:22: read: connection reset by peer
	I0408 18:13:52.614804  149099 retry.go:31] will retry after 317.661358ms: ssh: handshake failed: read tcp 192.168.39.1:33198->192.168.39.89:22: read: connection reset by peer
	W0408 18:13:52.614861  149099 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33214->192.168.39.89:22: read: connection reset by peer
	I0408 18:13:52.614869  149099 retry.go:31] will retry after 263.982201ms: ssh: handshake failed: read tcp 192.168.39.1:33214->192.168.39.89:22: read: connection reset by peer
	I0408 18:13:52.619079  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0408 18:13:52.619482  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:52.620025  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:52.620043  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:52.620421  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:52.620626  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:52.622158  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:52.624041  149099 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 18:13:52.625438  149099 out.go:177]   - Using image docker.io/busybox:stable
	I0408 18:13:52.627189  149099 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:13:52.627216  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 18:13:52.627242  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:52.630668  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.631118  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:52.631147  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:52.631386  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:52.631623  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:52.631801  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:52.631971  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	W0408 18:13:52.633778  149099 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33222->192.168.39.89:22: read: connection reset by peer
	I0408 18:13:52.633804  149099 retry.go:31] will retry after 264.276135ms: ssh: handshake failed: read tcp 192.168.39.1:33222->192.168.39.89:22: read: connection reset by peer
	I0408 18:13:52.697988  149099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 18:13:52.708601  149099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 18:13:52.835265  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 18:13:52.847420  149099 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0408 18:13:52.847440  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0408 18:13:52.864854  149099 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 18:13:52.864876  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 18:13:52.886123  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0408 18:13:52.921151  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 18:13:52.949931  149099 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 18:13:52.949961  149099 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 18:13:52.957109  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0408 18:13:52.961157  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 18:13:52.972927  149099 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 18:13:52.972961  149099 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 18:13:52.990246  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 18:13:53.017872  149099 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 18:13:53.017918  149099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 18:13:53.032925  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 18:13:53.032956  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 18:13:53.060685  149099 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 18:13:53.060715  149099 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 18:13:53.110807  149099 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:13:53.110845  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 18:13:53.137424  149099 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 18:13:53.137456  149099 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 18:13:53.192032  149099 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:13:53.192062  149099 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 18:13:53.221433  149099 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 18:13:53.221460  149099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 18:13:53.237715  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 18:13:53.237742  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 18:13:53.257215  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 18:13:53.270604  149099 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 18:13:53.270633  149099 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 18:13:53.303487  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 18:13:53.335696  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 18:13:53.399485  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 18:13:53.459976  149099 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 18:13:53.460001  149099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 18:13:53.482666  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 18:13:53.482705  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 18:13:53.493636  149099 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:13:53.493660  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 18:13:53.510849  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 18:13:53.759786  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 18:13:53.784704  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 18:13:53.784729  149099 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 18:13:53.836951  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 18:13:53.836990  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 18:13:54.052816  149099 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:13:54.052840  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 18:13:54.155596  149099 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 18:13:54.155622  149099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 18:13:54.392239  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:13:54.421793  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 18:13:54.421821  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 18:13:54.478838  149099 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.780800707s)
	I0408 18:13:54.478891  149099 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0408 18:13:54.478890  149099 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.770249095s)
	I0408 18:13:54.480023  149099 node_ready.go:35] waiting up to 6m0s for node "addons-835623" to be "Ready" ...
	I0408 18:13:54.483396  149099 node_ready.go:49] node "addons-835623" has status "Ready":"True"
	I0408 18:13:54.483425  149099 node_ready.go:38] duration metric: took 3.371694ms for node "addons-835623" to be "Ready" ...
	I0408 18:13:54.483438  149099 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 18:13:54.498153  149099 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-dbltp" in "kube-system" namespace to be "Ready" ...
	I0408 18:13:54.730736  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 18:13:54.730762  149099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 18:13:54.985546  149099 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-835623" context rescaled to 1 replicas
	I0408 18:13:55.034731  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 18:13:55.034792  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 18:13:55.202833  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.316668748s)
	I0408 18:13:55.202871  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.281693005s)
	I0408 18:13:55.202895  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.202908  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.202895  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.202919  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.203049  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.367746345s)
	I0408 18:13:55.203089  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.203101  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.203294  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203323  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.203330  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203336  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.203355  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.203371  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.203510  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.203523  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.203539  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.203549  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.203591  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203622  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.203638  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.203655  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.203667  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203671  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.203767  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203910  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.203966  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.203982  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.203995  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.204020  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.205560  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.205576  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.224690  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:55.224711  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:55.224998  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:55.225013  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:55.225028  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:55.338377  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 18:13:55.338408  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 18:13:55.638768  149099 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:13:55.638809  149099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 18:13:55.803801  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 18:13:56.619280  149099 pod_ready.go:103] pod "coredns-668d6bf9bc-dbltp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:13:58.543544  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.58639197s)
	I0408 18:13:58.543598  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.543609  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.543605  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.553333345s)
	I0408 18:13:58.543630  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.543639  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.543556  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.582371328s)
	I0408 18:13:58.543708  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.286451175s)
	I0408 18:13:58.543789  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.543819  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.543716  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.543968  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.544017  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544062  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544075  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.544083  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.544095  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544105  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544113  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.544125  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.544174  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544242  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544248  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544259  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544266  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544267  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.544276  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.544318  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544353  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544415  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544439  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544459  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:13:58.544490  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:13:58.544615  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544647  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544670  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544678  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544721  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:13:58.544794  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.544811  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.544823  149099 addons.go:479] Verifying addon registry=true in "addons-835623"
	I0408 18:13:58.546135  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.546154  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.546259  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:13:58.546273  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:13:58.546859  149099 out.go:177] * Verifying registry addon...
	I0408 18:13:58.548699  149099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 18:13:58.609106  149099 pod_ready.go:93] pod "coredns-668d6bf9bc-dbltp" in "kube-system" namespace has status "Ready":"True"
	I0408 18:13:58.609139  149099 pod_ready.go:82] duration metric: took 4.110937372s for pod "coredns-668d6bf9bc-dbltp" in "kube-system" namespace to be "Ready" ...
	I0408 18:13:58.609150  149099 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace to be "Ready" ...
	I0408 18:13:58.627506  149099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 18:13:58.627538  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:13:59.170011  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:13:59.420968  149099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 18:13:59.421023  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:59.424281  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:59.424769  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:59.424804  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:59.424993  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:59.425215  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:59.425408  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:59.425551  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:13:59.573643  149099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 18:13:59.577242  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:13:59.683915  149099 addons.go:238] Setting addon gcp-auth=true in "addons-835623"
	I0408 18:13:59.683982  149099 host.go:66] Checking if "addons-835623" exists ...
	I0408 18:13:59.684311  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:59.684346  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:59.700973  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I0408 18:13:59.701534  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:59.702124  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:59.702158  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:59.702638  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:59.703128  149099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:13:59.703157  149099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:13:59.719999  149099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33693
	I0408 18:13:59.720625  149099 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:13:59.721203  149099 main.go:141] libmachine: Using API Version  1
	I0408 18:13:59.721231  149099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:13:59.721644  149099 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:13:59.721903  149099 main.go:141] libmachine: (addons-835623) Calling .GetState
	I0408 18:13:59.723887  149099 main.go:141] libmachine: (addons-835623) Calling .DriverName
	I0408 18:13:59.724238  149099 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 18:13:59.724267  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHHostname
	I0408 18:13:59.727439  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:59.727945  149099 main.go:141] libmachine: (addons-835623) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:af:33", ip: ""} in network mk-addons-835623: {Iface:virbr1 ExpiryTime:2025-04-08 19:13:18 +0000 UTC Type:0 Mac:52:54:00:ed:af:33 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-835623 Clientid:01:52:54:00:ed:af:33}
	I0408 18:13:59.727977  149099 main.go:141] libmachine: (addons-835623) DBG | domain addons-835623 has defined IP address 192.168.39.89 and MAC address 52:54:00:ed:af:33 in network mk-addons-835623
	I0408 18:13:59.728407  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHPort
	I0408 18:13:59.728674  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHKeyPath
	I0408 18:13:59.728853  149099 main.go:141] libmachine: (addons-835623) Calling .GetSSHUsername
	I0408 18:13:59.729010  149099 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/addons-835623/id_rsa Username:docker}
	I0408 18:14:00.062094  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:00.611894  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:00.680012  149099 pod_ready.go:103] pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:00.709067  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.405537878s)
	I0408 18:14:00.709130  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709145  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709148  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.373415774s)
	I0408 18:14:00.709191  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709207  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709265  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.309741441s)
	I0408 18:14:00.709316  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709331  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709346  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.198470121s)
	I0408 18:14:00.709373  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709382  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709416  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.709453  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.709461  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.709469  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709476  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709493  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.709505  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.709514  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709522  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709609  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.709645  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.709655  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.709657  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.709663  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.709667  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709672  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709675  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709680  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.709795  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.949949132s)
	I0408 18:14:00.709826  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.709853  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.710017  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.317749769s)
	W0408 18:14:00.710043  149099 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 18:14:00.710064  149099 retry.go:31] will retry after 245.75521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 18:14:00.710116  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.710134  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.710140  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.710152  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.710158  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.710164  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.710172  149099 addons.go:479] Verifying addon metrics-server=true in "addons-835623"
	I0408 18:14:00.711716  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.711736  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.711750  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.711719  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.711721  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.711752  149099 addons.go:479] Verifying addon ingress=true in "addons-835623"
	I0408 18:14:00.711761  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.712162  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.712172  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.712181  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.712187  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.714131  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.714180  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.714193  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.714935  149099 out.go:177] * Verifying ingress addon...
	I0408 18:14:00.715893  149099 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-835623 service yakd-dashboard -n yakd-dashboard
	
	I0408 18:14:00.717447  149099 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 18:14:00.731559  149099 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 18:14:00.731585  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:00.781235  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:00.781256  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:00.781590  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:00.781612  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:00.781628  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:00.956066  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 18:14:01.060014  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:01.232802  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:01.560459  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:01.763894  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:02.026329  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.222405233s)
	I0408 18:14:02.026404  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:02.026426  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:02.026420  149099 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.302154061s)
	I0408 18:14:02.026761  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:02.026781  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:02.026796  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:02.026799  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:02.026804  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:02.027227  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:02.027243  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:02.027257  149099 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-835623"
	I0408 18:14:02.028798  149099 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 18:14:02.028884  149099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0408 18:14:02.030858  149099 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0408 18:14:02.031474  149099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 18:14:02.032150  149099 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 18:14:02.032185  149099 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 18:14:02.070282  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:02.071021  149099 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 18:14:02.071040  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:02.095350  149099 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 18:14:02.095379  149099 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 18:14:02.169581  149099 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:14:02.169605  149099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0408 18:14:02.222050  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:02.224880  149099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 18:14:02.539298  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:02.552579  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:02.721340  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:02.743353  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.787213296s)
	I0408 18:14:02.743430  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:02.743449  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:02.743717  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:02.743735  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:02.743744  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:02.743752  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:02.744001  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:02.744031  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:03.038031  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:03.051897  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:03.116213  149099 pod_ready.go:103] pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:03.222217  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:03.579851  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:03.602968  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:03.636377  149099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.411446981s)
	I0408 18:14:03.636453  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:03.636474  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:03.636839  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:03.636899  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:03.636920  149099 main.go:141] libmachine: Making call to close driver server
	I0408 18:14:03.636919  149099 main.go:141] libmachine: (addons-835623) DBG | Closing plugin on server side
	I0408 18:14:03.636929  149099 main.go:141] libmachine: (addons-835623) Calling .Close
	I0408 18:14:03.637193  149099 main.go:141] libmachine: Successfully made call to close driver server
	I0408 18:14:03.637224  149099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 18:14:03.638396  149099 addons.go:479] Verifying addon gcp-auth=true in "addons-835623"
	I0408 18:14:03.640160  149099 out.go:177] * Verifying gcp-auth addon...
	I0408 18:14:03.642594  149099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 18:14:03.707939  149099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 18:14:03.707963  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:03.777073  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:04.035609  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:04.055057  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:04.146742  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:04.222747  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:04.535444  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:04.552220  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:04.646745  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:04.741965  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:05.035577  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:05.052229  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:05.115217  149099 pod_ready.go:98] pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:14:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2025-04-08 18:13:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-08 18:13:57 +0000 UTC,FinishedAt:2025-04-08 18:14:03 +0000 UTC,ContainerID:cri-o://120be1e8c16a8e9aac02bdb4f2adb63e01a88eab880ebad57a75ebc5beaa67f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://120be1e8c16a8e9aac02bdb4f2adb63e01a88eab880ebad57a75ebc5beaa67f6 Started:0xc001a92f90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002974620} {Name:kube-api-access-7tcmc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002974630}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0408 18:14:05.115264  149099 pod_ready.go:82] duration metric: took 6.506106573s for pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace to be "Ready" ...
	E0408 18:14:05.115281  149099 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-xk7t6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:14:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-04-08 18:13:52 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.89 HostIPs:[{IP:192.168.39.89}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2025-04-08 18:13:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-04-08 18:13:57 +0000 UTC,FinishedAt:2025-04-08 18:14:03 +0000 UTC,ContainerID:cri-o://120be1e8c16a8e9aac02bdb4f2adb63e01a88eab880ebad57a75ebc5beaa67f6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://120be1e8c16a8e9aac02bdb4f2adb63e01a88eab880ebad57a75ebc5beaa67f6 Started:0xc001a92f90 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002974620} {Name:kube-api-access-7tcmc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002974630}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0408 18:14:05.115293  149099 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.119661  149099 pod_ready.go:93] pod "etcd-addons-835623" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:05.119688  149099 pod_ready.go:82] duration metric: took 4.385168ms for pod "etcd-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.119703  149099 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.126829  149099 pod_ready.go:93] pod "kube-apiserver-addons-835623" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:05.126852  149099 pod_ready.go:82] duration metric: took 7.14216ms for pod "kube-apiserver-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.126864  149099 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.132056  149099 pod_ready.go:93] pod "kube-controller-manager-addons-835623" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:05.132090  149099 pod_ready.go:82] duration metric: took 5.218233ms for pod "kube-controller-manager-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.132107  149099 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r4qmh" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.139596  149099 pod_ready.go:93] pod "kube-proxy-r4qmh" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:05.139632  149099 pod_ready.go:82] duration metric: took 7.512336ms for pod "kube-proxy-r4qmh" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.139650  149099 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.146246  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:05.221174  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:05.514081  149099 pod_ready.go:93] pod "kube-scheduler-addons-835623" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:05.514120  149099 pod_ready.go:82] duration metric: took 374.459825ms for pod "kube-scheduler-addons-835623" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.514137  149099 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:05.536940  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:05.552755  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:05.645746  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:05.721296  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:06.035291  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:06.051938  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:06.146366  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:06.222536  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:06.536064  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:06.552394  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:06.646401  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:06.722337  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:07.035827  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:07.052540  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:07.146041  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:07.221212  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:07.520316  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:07.537144  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:07.552399  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:07.646307  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:07.721409  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:08.300111  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:08.300279  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:08.300409  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:08.300490  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:08.535505  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:08.552226  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:08.646576  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:08.720518  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:09.035699  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:09.053014  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:09.145898  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:09.221604  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:09.523658  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:09.538005  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:09.552921  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:09.646276  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:09.721344  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:10.035074  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:10.051971  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:10.145801  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:10.220730  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:10.536310  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:10.553146  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:10.645984  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:10.721124  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:11.035564  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:11.052055  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:11.145746  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:11.220768  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:11.524688  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:11.548561  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:11.552273  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:11.646327  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:11.721596  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:12.036065  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:12.051758  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:12.146185  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:12.224310  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:12.537288  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:12.557242  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:12.645357  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:12.721342  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:13.035540  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:13.052557  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:13.146495  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:13.221143  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:13.536107  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:13.552805  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:13.645721  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:13.720985  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:14.020901  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:14.034952  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:14.052864  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:14.145725  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:14.220861  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:14.537718  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:14.552566  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:14.646366  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:14.721136  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:15.035002  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:15.052913  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:15.146215  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:15.221174  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:15.538029  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:15.552151  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:15.646523  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:15.720452  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:16.034673  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:16.051906  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:16.145929  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:16.221008  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:16.520988  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:16.536802  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:16.552638  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:16.645705  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:16.721011  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:17.036595  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:17.051921  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:17.145514  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:17.221627  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:17.536535  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:17.552316  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:17.646228  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:17.721206  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:18.035435  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:18.052953  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:18.146052  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:18.220941  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:18.536017  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:18.929424  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:18.929705  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:18.929914  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:19.020043  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:19.035049  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:19.051977  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:19.145676  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:19.220345  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:19.535564  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:19.552390  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:19.646239  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:19.721268  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:20.035746  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:20.052456  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:20.147023  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:20.221432  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:20.536456  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:20.939720  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:20.939978  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:20.940226  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:21.034986  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:21.052707  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:21.145527  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:21.227682  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:21.522103  149099 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"False"
	I0408 18:14:21.536744  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:21.553298  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:21.647615  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:21.720990  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:22.035539  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:22.052308  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:22.146204  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:22.221392  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:22.774796  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:22.775006  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:22.775307  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:22.775564  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:23.036613  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:23.053631  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:23.146712  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:23.223376  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:23.537119  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:23.552111  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:23.645907  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:23.720519  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:24.019597  149099 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace has status "Ready":"True"
	I0408 18:14:24.019626  149099 pod_ready.go:82] duration metric: took 18.505480269s for pod "nvidia-device-plugin-daemonset-bz8hp" in "kube-system" namespace to be "Ready" ...
	I0408 18:14:24.019637  149099 pod_ready.go:39] duration metric: took 29.53618384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 18:14:24.019667  149099 api_server.go:52] waiting for apiserver process to appear ...
	I0408 18:14:24.019732  149099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:14:24.034929  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:24.051650  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:24.083733  149099 api_server.go:72] duration metric: took 31.680358586s to wait for apiserver process to appear ...
	I0408 18:14:24.083764  149099 api_server.go:88] waiting for apiserver healthz status ...
	I0408 18:14:24.083791  149099 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I0408 18:14:24.088301  149099 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I0408 18:14:24.089400  149099 api_server.go:141] control plane version: v1.32.2
	I0408 18:14:24.089427  149099 api_server.go:131] duration metric: took 5.656404ms to wait for apiserver health ...
	I0408 18:14:24.089437  149099 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 18:14:24.093887  149099 system_pods.go:59] 18 kube-system pods found
	I0408 18:14:24.093995  149099 system_pods.go:61] "amd-gpu-device-plugin-lhz8n" [d95d26e6-1cef-4203-8a87-7cbcbe99649f] Running
	I0408 18:14:24.094010  149099 system_pods.go:61] "coredns-668d6bf9bc-dbltp" [56d399ef-21b7-4d88-b5bc-6d03b81be2b1] Running
	I0408 18:14:24.094027  149099 system_pods.go:61] "csi-hostpath-attacher-0" [a71722cd-3be3-41de-9479-75374b22dcae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:14:24.094040  149099 system_pods.go:61] "csi-hostpath-resizer-0" [f6af5f20-ae63-42dd-b6b4-d17cfbd68d54] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:14:24.094052  149099 system_pods.go:61] "csi-hostpathplugin-k97lw" [6fc9b7f3-e419-4bea-b32b-5ede88e1b61f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:14:24.094063  149099 system_pods.go:61] "etcd-addons-835623" [2c569698-5d4c-49cd-94c2-65fc131146b3] Running
	I0408 18:14:24.094073  149099 system_pods.go:61] "kube-apiserver-addons-835623" [834793aa-1852-49e5-bc56-caf552cc522e] Running
	I0408 18:14:24.094079  149099 system_pods.go:61] "kube-controller-manager-addons-835623" [dacafc64-8333-4fd9-9f7e-a5e9024b35e4] Running
	I0408 18:14:24.094088  149099 system_pods.go:61] "kube-ingress-dns-minikube" [7784ac00-538c-4017-9cce-15823c8dc03c] Running
	I0408 18:14:24.094093  149099 system_pods.go:61] "kube-proxy-r4qmh" [e0034cf6-56ed-4380-b2f2-ecfc267c8fd1] Running
	I0408 18:14:24.094100  149099 system_pods.go:61] "kube-scheduler-addons-835623" [3e350b32-91b9-4b80-9098-2b332f1b543d] Running
	I0408 18:14:24.094111  149099 system_pods.go:61] "metrics-server-7fbb699795-hw9g7" [dbdceedc-2077-4048-80db-7cb73c65d3d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:14:24.094120  149099 system_pods.go:61] "nvidia-device-plugin-daemonset-bz8hp" [aa2741c5-c5d0-499e-b5fd-788420d37b9b] Running
	I0408 18:14:24.094131  149099 system_pods.go:61] "registry-6c88467877-bq7bj" [3695ddb5-a636-4113-96e4-dedadf9b27e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:14:24.094142  149099 system_pods.go:61] "registry-proxy-rtchc" [642d32b3-1efb-4e55-8406-232124327998] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:14:24.094155  149099 system_pods.go:61] "snapshot-controller-68b874b76f-cvjsf" [0e1005c2-6ced-4c85-ba41-07ee7b5389ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:14:24.094167  149099 system_pods.go:61] "snapshot-controller-68b874b76f-rqw5c" [9124d159-9c86-438b-8e4a-fed6aad38850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:14:24.094179  149099 system_pods.go:61] "storage-provisioner" [c3d3312d-8773-4a66-b323-49933b7c0d61] Running
	I0408 18:14:24.094194  149099 system_pods.go:74] duration metric: took 4.748184ms to wait for pod list to return data ...
	I0408 18:14:24.094209  149099 default_sa.go:34] waiting for default service account to be created ...
	I0408 18:14:24.096325  149099 default_sa.go:45] found service account: "default"
	I0408 18:14:24.096347  149099 default_sa.go:55] duration metric: took 2.128436ms for default service account to be created ...
	I0408 18:14:24.096357  149099 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 18:14:24.099780  149099 system_pods.go:86] 18 kube-system pods found
	I0408 18:14:24.099825  149099 system_pods.go:89] "amd-gpu-device-plugin-lhz8n" [d95d26e6-1cef-4203-8a87-7cbcbe99649f] Running
	I0408 18:14:24.099834  149099 system_pods.go:89] "coredns-668d6bf9bc-dbltp" [56d399ef-21b7-4d88-b5bc-6d03b81be2b1] Running
	I0408 18:14:24.099845  149099 system_pods.go:89] "csi-hostpath-attacher-0" [a71722cd-3be3-41de-9479-75374b22dcae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 18:14:24.099853  149099 system_pods.go:89] "csi-hostpath-resizer-0" [f6af5f20-ae63-42dd-b6b4-d17cfbd68d54] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 18:14:24.099864  149099 system_pods.go:89] "csi-hostpathplugin-k97lw" [6fc9b7f3-e419-4bea-b32b-5ede88e1b61f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 18:14:24.099870  149099 system_pods.go:89] "etcd-addons-835623" [2c569698-5d4c-49cd-94c2-65fc131146b3] Running
	I0408 18:14:24.099876  149099 system_pods.go:89] "kube-apiserver-addons-835623" [834793aa-1852-49e5-bc56-caf552cc522e] Running
	I0408 18:14:24.099887  149099 system_pods.go:89] "kube-controller-manager-addons-835623" [dacafc64-8333-4fd9-9f7e-a5e9024b35e4] Running
	I0408 18:14:24.099893  149099 system_pods.go:89] "kube-ingress-dns-minikube" [7784ac00-538c-4017-9cce-15823c8dc03c] Running
	I0408 18:14:24.099899  149099 system_pods.go:89] "kube-proxy-r4qmh" [e0034cf6-56ed-4380-b2f2-ecfc267c8fd1] Running
	I0408 18:14:24.099904  149099 system_pods.go:89] "kube-scheduler-addons-835623" [3e350b32-91b9-4b80-9098-2b332f1b543d] Running
	I0408 18:14:24.099915  149099 system_pods.go:89] "metrics-server-7fbb699795-hw9g7" [dbdceedc-2077-4048-80db-7cb73c65d3d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 18:14:24.099922  149099 system_pods.go:89] "nvidia-device-plugin-daemonset-bz8hp" [aa2741c5-c5d0-499e-b5fd-788420d37b9b] Running
	I0408 18:14:24.099930  149099 system_pods.go:89] "registry-6c88467877-bq7bj" [3695ddb5-a636-4113-96e4-dedadf9b27e0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 18:14:24.099941  149099 system_pods.go:89] "registry-proxy-rtchc" [642d32b3-1efb-4e55-8406-232124327998] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0408 18:14:24.099951  149099 system_pods.go:89] "snapshot-controller-68b874b76f-cvjsf" [0e1005c2-6ced-4c85-ba41-07ee7b5389ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:14:24.099959  149099 system_pods.go:89] "snapshot-controller-68b874b76f-rqw5c" [9124d159-9c86-438b-8e4a-fed6aad38850] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 18:14:24.099968  149099 system_pods.go:89] "storage-provisioner" [c3d3312d-8773-4a66-b323-49933b7c0d61] Running
	I0408 18:14:24.099979  149099 system_pods.go:126] duration metric: took 3.615634ms to wait for k8s-apps to be running ...
	I0408 18:14:24.099994  149099 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 18:14:24.100052  149099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:14:24.128956  149099 system_svc.go:56] duration metric: took 28.949574ms WaitForService to wait for kubelet
	I0408 18:14:24.128991  149099 kubeadm.go:582] duration metric: took 31.725626234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 18:14:24.129011  149099 node_conditions.go:102] verifying NodePressure condition ...
	I0408 18:14:24.131894  149099 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 18:14:24.131926  149099 node_conditions.go:123] node cpu capacity is 2
	I0408 18:14:24.131946  149099 node_conditions.go:105] duration metric: took 2.929447ms to run NodePressure ...
	I0408 18:14:24.131962  149099 start.go:241] waiting for startup goroutines ...
	I0408 18:14:24.145516  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:24.220175  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:24.536279  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:24.552270  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:24.646890  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:24.720593  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:25.035348  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:25.051959  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:25.146220  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:25.221616  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:25.541891  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:25.552701  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:25.646926  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:25.721143  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:26.036479  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:26.059811  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:26.146465  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:26.222552  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:26.535353  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:26.552951  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:26.656042  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:26.754030  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:27.036349  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:27.052534  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:27.146951  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:27.221304  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:27.536986  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:27.551582  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:27.646333  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:27.721482  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:28.036488  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:28.054439  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:28.146900  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:28.221104  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:28.536553  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:28.552372  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:28.671761  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:28.777122  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:29.036253  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:29.052114  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:29.145926  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:29.221231  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:29.535249  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:29.552378  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:29.646947  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:29.721103  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:30.035549  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:30.052617  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:30.146328  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:30.221680  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:30.535336  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:30.551974  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:30.646307  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:30.721579  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:31.036492  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:31.052499  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:31.146546  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:31.220460  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:31.536958  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:31.551840  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:31.645878  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:31.742850  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:32.036060  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:32.053281  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:32.146769  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:32.221619  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:32.537179  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:32.552283  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:32.648466  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:32.721456  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:33.160237  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:33.160294  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:33.160968  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:33.221798  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:33.534933  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:33.554364  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:33.645527  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:33.721533  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:34.035534  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:34.052435  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:34.146485  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:34.221511  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:34.536773  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:34.552425  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:34.646452  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:34.724365  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:35.036231  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:35.051979  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:35.146453  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:35.222443  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:35.534835  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:35.553022  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:35.646671  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:35.747089  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:36.035640  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:36.052851  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:36.145847  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:36.221019  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:36.535854  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:36.552719  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:36.646647  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:36.720890  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:37.035219  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:37.052009  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:37.145683  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:37.511501  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:37.535792  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:37.552326  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:37.646034  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:37.720688  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:38.034516  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:38.052415  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:38.146173  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:38.222691  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:38.538230  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:38.553507  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 18:14:38.646762  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:38.721800  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:39.036019  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:39.051640  149099 kapi.go:107] duration metric: took 40.502939491s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 18:14:39.146748  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:39.221013  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:39.535513  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:39.645723  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:39.720912  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:40.036116  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:40.146460  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:40.221276  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:40.535749  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:40.646703  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:40.721585  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:41.035996  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:41.146013  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:41.222022  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:41.538098  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:41.646460  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:41.721489  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:42.035925  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:42.146330  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:42.223723  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:42.534779  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:42.646380  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:42.721979  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:43.035094  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:43.145504  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:43.221859  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:43.534968  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:43.645829  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:43.721152  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:44.035425  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:44.146373  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:44.221166  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:44.535250  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:44.647489  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:44.721370  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:45.035199  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:45.146104  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:45.225906  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:45.535767  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:45.646742  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:45.720557  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:46.034886  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:46.145684  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:46.220728  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:46.540635  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:46.646222  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:46.721752  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:47.035256  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:47.146023  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:47.220744  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:47.536125  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:47.646215  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:47.721729  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:48.035992  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:48.168092  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:48.265671  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:48.537451  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:48.645934  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:48.720929  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:49.035248  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:49.145766  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:49.220476  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:49.556667  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:49.655983  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:49.721089  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:50.036415  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:50.146372  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:50.221935  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:50.535814  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:50.645410  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:50.727504  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:51.035127  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:51.146999  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:51.221258  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:51.535290  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:51.646707  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:51.720662  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:52.034264  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:52.145907  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:52.220910  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:52.535497  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:52.646142  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:52.721399  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:53.035683  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:53.146360  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:53.745048  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:53.745861  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:53.746072  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:53.842511  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:54.035689  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:54.145758  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:54.220644  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:54.536316  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:54.646257  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:54.733379  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:55.039405  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:55.146966  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:55.224513  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:55.545187  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:55.647541  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:55.721284  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:56.037343  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:56.147189  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:56.221783  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:56.538446  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:56.647384  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:56.725477  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:57.035843  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:57.146047  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:57.222007  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:57.534947  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:57.646408  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:57.721280  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:58.036490  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:58.146867  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:58.220936  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:58.542404  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:58.646893  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:58.721134  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:59.036102  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:59.146591  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:59.221521  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:14:59.535313  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:14:59.646208  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:14:59.722010  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:00.043856  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:00.147320  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:00.251544  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:00.536819  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:00.645474  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:00.721481  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:01.039986  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:01.145809  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:01.221187  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:01.536306  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:01.646404  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:01.722484  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:02.035079  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:02.146138  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:02.221898  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:02.535353  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:02.646019  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:02.721512  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:03.034763  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:03.146680  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:03.220592  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:03.536108  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:03.647752  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:03.721036  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:04.042016  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:04.161775  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:04.222269  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:04.546804  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:04.649316  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:04.762577  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:05.036256  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:05.147362  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:05.221808  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:05.535438  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:05.646724  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:05.720960  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:06.036074  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:06.146472  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:06.220600  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:06.539079  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:06.646735  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:06.721146  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:07.036393  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:07.147207  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:07.223401  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:07.535376  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:07.646311  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:07.721669  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:08.036476  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:08.146214  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:08.221365  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:08.536917  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:08.647675  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:08.720785  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:09.036046  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:09.147547  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:09.221371  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:09.536988  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:09.646091  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:09.721511  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:10.279500  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:10.280276  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:10.280416  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:10.537494  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:10.646301  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:10.721905  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:11.036527  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:11.146261  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:11.221319  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:11.536740  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:11.645245  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:11.721423  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:12.035365  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:12.146411  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:12.221461  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:12.598738  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:12.646175  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:12.787638  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:13.034431  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:13.146297  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:13.221263  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:13.536038  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:13.650053  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:13.750106  149099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 18:15:14.036519  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:14.147406  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:14.225232  149099 kapi.go:107] duration metric: took 1m13.507775515s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 18:15:14.536701  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:14.647861  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:15.035886  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:15.147464  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:15.539135  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 18:15:15.647757  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:16.035680  149099 kapi.go:107] duration metric: took 1m14.004199147s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 18:15:16.146952  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:16.647611  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:17.146332  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:17.646743  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:18.146099  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:18.647604  149099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 18:15:19.146929  149099 kapi.go:107] duration metric: took 1m15.504332698s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0408 18:15:19.148720  149099 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-835623 cluster.
	I0408 18:15:19.150011  149099 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 18:15:19.151441  149099 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0408 18:15:19.153069  149099 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0408 18:15:19.154459  149099 addons.go:514] duration metric: took 1m26.751057261s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin default-storageclass inspektor-gadget storage-provisioner nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0408 18:15:19.154526  149099 start.go:246] waiting for cluster config update ...
	I0408 18:15:19.154562  149099 start.go:255] writing updated cluster config ...
	I0408 18:15:19.154864  149099 ssh_runner.go:195] Run: rm -f paused
	I0408 18:15:19.212809  149099 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 18:15:19.214772  149099 out.go:177] * Done! kubectl is now configured to use "addons-835623" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.113094519Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114102750Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114200056Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114243016Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114276342Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114301543Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114329308Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114351195Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114379478Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114423276Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.114499293Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.140354241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=776e3219-308b-4eeb-85fc-4117a370c525 name=/runtime.v1.RuntimeService/Version
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.140498981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=776e3219-308b-4eeb-85fc-4117a370c525 name=/runtime.v1.RuntimeService/Version
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.142904154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=725586b5-1180-498a-a1fb-9a4cccdc748f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.144235177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136310144203393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=725586b5-1180-498a-a1fb-9a4cccdc748f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.144914106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c8dac6a-8e80-4a2f-951c-02336fe27817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.144989604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c8dac6a-8e80-4a2f-951c-02336fe27817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.146253882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:705faad9a0d17b09051386d693882aabc26212c77dbd2bfe4787ef74773a4751,PodSandboxId:bbd410126eb42f302045a423fdbe1456c44aa4fa38b4eae98753956d11723ef5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744136170368144731,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d98d15-fee7-4d69-9366-18b8df6b682f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcbef5437701d36e3dc23c9e8347e325a54317406a529c4ffb5f7895c31ae32,PodSandboxId:9d721f735cb2d39590332cbc0c158e2a994ffe485901d9b3098a2c50ed9abd3c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744136123802804560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: afbcf261-6a00-410f-8bfc-9d762c6d3c14,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec32f75c4ebd79001ed0d9f2193b7c187c320ce33e7b97a1f107cb8f560b513c,PodSandboxId:c00937bfdf7534f75ee4204578e764719cff53d5d0729b08bba7c0caf44b22c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744136113547764401,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-bn2fn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae078785-eacd-4f49-abd1-e1f4322d08dc,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2d0c3b6e338ca0ed8f66027ccd98463d99c05ded85c598104a97ae7f8f202043,PodSandboxId:2d0ae67b7fb6e0dee4819b3c5be07af3fb2e0863b22d7dcde46b10610c274877,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744136099969788437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k5wmv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc2930e8-e05c-440e-889d-c63120f22ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ac85834f21737938facc026f5a2a099dbdbb702af59f5514de87a8c760a28,PodSandboxId:e1e442191c9004e81c976e5663309e935f48528e5f62f55b3413a0322f7bab6b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744136098460436036,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pbmcw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c3d0f5c-6a53-47e7-81a5-6a2ccd32ccf3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953cd9a25d4d03a48780c3c5ef3a25b30aae9636dc1a18ee89ce53b5f2fbf21c,PodSandboxId:c9ec7664d26b2abacc45b4d6c04cbc640e401fa23d9e095f709355e0bfd8a46a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744136051233829812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhz8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d95d26e6-1cef-4203-8a87-7cbcbe99649f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce1eb83135e5369b628533506dc792727c78ab8a2e60019d3ba69c9278cbf5e,PodSandboxId:e1001fdf07010c297cce27d412f7a16085004afbdc63c507d1ffd868d1cc3a62,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744136048737699449,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7784ac00-538c-4017-9cce-15823c8dc03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92203465e80c98f584410e953f6f3853d557eb0bad092a645bb1d543e876c196,PodSandboxId:6875815c95fa18d7b19e19bac1e744330e910672107b5729a525d7c02fcaf01a,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744136038776296368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d3312d-8773-4a66-b323-49933b7c0d61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104e6c8728f6475f128a925f7313cf42ecf443e9e625efa2dbe15b2a1ea7ed6b,PodSandboxId:edfbaa2a4c3ad2cf5760554082b6ac955acf8e674ba900cbd284b6b766f7f67e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744136036545109013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dbltp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d399ef-21b7-4d88-b5bc-6d03b81be2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e66e5b8003279976786346c98db439a7d1286929a3c8f204e1f98324e8e0e60f,PodSandboxId:4dba3daaf3ed30ee42280b4d743b5928af2531016f80a4eb07363a92c20ea47b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744136034049666521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4qmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0034cf6-56ed-4380-b2f2-ecfc267c8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39d2dfc35152466c82799d9a0798
146e2973e8301d7fbef59d57f1f49275b0f,PodSandboxId:fb1170165bee097ac6330e32acd4e1068ab6a0b8b2d55005088dc83e5e58d277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744136022437613513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8f3950d00ae7ab765928dee35f1ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcf4e912fd122f5d6d
51cb9b8ab7a226a7bbf90a59b598c490d034375c90f47,PodSandboxId:1141ae680b640357e3d3984c9f459c637d0b23cda73f9294462a35c8f38c86cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744136022402926381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311324e2e706b54591a0ed86c832fb52,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214dc9efc27f1d58b399a684c4f6c4fd471e71750dee77032281d9601cfe5843,P
odSandboxId:f34391426ad185f0f3ae2d38ad2f7b8d2ee1233ecf385ac7111aebec98c1e12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744136022369852844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3e011487ce553fd22049eba18ad297,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1526c44a6021d0c536b3b609eeec0fc22c893fa034bba4d87baa302315ce0837,PodSandboxId:568d7
60a61294de08310f83e127130fe8a09801ba4dbc58465908fb1ef7c57e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744136022333329323,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f8a102e74c6257b46c0c6e728f0ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c8dac6a-8e80-4a2f-951c-02336fe27817 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.187079248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbcc94fc-cb0a-4c4a-afa8-d094506666d6 name=/runtime.v1.RuntimeService/Version
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.187171336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbcc94fc-cb0a-4c4a-afa8-d094506666d6 name=/runtime.v1.RuntimeService/Version
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.188505621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cca0ece8-21cf-4ab9-8cf6-d20dd4426cc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.189659179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136310189630408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cca0ece8-21cf-4ab9-8cf6-d20dd4426cc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.190561409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fe1a046-2524-4f50-8421-ce91461bd65a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.190643835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fe1a046-2524-4f50-8421-ce91461bd65a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 18:18:30 addons-835623 crio[661]: time="2025-04-08 18:18:30.191168169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:705faad9a0d17b09051386d693882aabc26212c77dbd2bfe4787ef74773a4751,PodSandboxId:bbd410126eb42f302045a423fdbe1456c44aa4fa38b4eae98753956d11723ef5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744136170368144731,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d98d15-fee7-4d69-9366-18b8df6b682f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bcbef5437701d36e3dc23c9e8347e325a54317406a529c4ffb5f7895c31ae32,PodSandboxId:9d721f735cb2d39590332cbc0c158e2a994ffe485901d9b3098a2c50ed9abd3c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744136123802804560,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: afbcf261-6a00-410f-8bfc-9d762c6d3c14,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec32f75c4ebd79001ed0d9f2193b7c187c320ce33e7b97a1f107cb8f560b513c,PodSandboxId:c00937bfdf7534f75ee4204578e764719cff53d5d0729b08bba7c0caf44b22c2,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744136113547764401,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-bn2fn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae078785-eacd-4f49-abd1-e1f4322d08dc,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2d0c3b6e338ca0ed8f66027ccd98463d99c05ded85c598104a97ae7f8f202043,PodSandboxId:2d0ae67b7fb6e0dee4819b3c5be07af3fb2e0863b22d7dcde46b10610c274877,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744136099969788437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k5wmv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc2930e8-e05c-440e-889d-c63120f22ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92ac85834f21737938facc026f5a2a099dbdbb702af59f5514de87a8c760a28,PodSandboxId:e1e442191c9004e81c976e5663309e935f48528e5f62f55b3413a0322f7bab6b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744136098460436036,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pbmcw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5c3d0f5c-6a53-47e7-81a5-6a2ccd32ccf3,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953cd9a25d4d03a48780c3c5ef3a25b30aae9636dc1a18ee89ce53b5f2fbf21c,PodSandboxId:c9ec7664d26b2abacc45b4d6c04cbc640e401fa23d9e095f709355e0bfd8a46a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744136051233829812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lhz8n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d95d26e6-1cef-4203-8a87-7cbcbe99649f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce1eb83135e5369b628533506dc792727c78ab8a2e60019d3ba69c9278cbf5e,PodSandboxId:e1001fdf07010c297cce27d412f7a16085004afbdc63c507d1ffd868d1cc3a62,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744136048737699449,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7784ac00-538c-4017-9cce-15823c8dc03c,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92203465e80c98f584410e953f6f3853d557eb0bad092a645bb1d543e876c196,PodSandboxId:6875815c95fa18d7b19e19bac1e744330e910672107b5729a525d7c02fcaf01a,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744136038776296368,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d3312d-8773-4a66-b323-49933b7c0d61,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:104e6c8728f6475f128a925f7313cf42ecf443e9e625efa2dbe15b2a1ea7ed6b,PodSandboxId:edfbaa2a4c3ad2cf5760554082b6ac955acf8e674ba900cbd284b6b766f7f67e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744136036545109013,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-dbltp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d399ef-21b7-4d88-b5bc-6d03b81be2b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e66e5b8003279976786346c98db439a7d1286929a3c8f204e1f98324e8e0e60f,PodSandboxId:4dba3daaf3ed30ee42280b4d743b5928af2531016f80a4eb07363a92c20ea47b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744136034049666521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4qmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0034cf6-56ed-4380-b2f2-ecfc267c8fd1,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d39d2dfc35152466c82799d9a0798
146e2973e8301d7fbef59d57f1f49275b0f,PodSandboxId:fb1170165bee097ac6330e32acd4e1068ab6a0b8b2d55005088dc83e5e58d277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744136022437613513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d8f3950d00ae7ab765928dee35f1ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcf4e912fd122f5d6d
51cb9b8ab7a226a7bbf90a59b598c490d034375c90f47,PodSandboxId:1141ae680b640357e3d3984c9f459c637d0b23cda73f9294462a35c8f38c86cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744136022402926381,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 311324e2e706b54591a0ed86c832fb52,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214dc9efc27f1d58b399a684c4f6c4fd471e71750dee77032281d9601cfe5843,P
odSandboxId:f34391426ad185f0f3ae2d38ad2f7b8d2ee1233ecf385ac7111aebec98c1e12a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744136022369852844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d3e011487ce553fd22049eba18ad297,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1526c44a6021d0c536b3b609eeec0fc22c893fa034bba4d87baa302315ce0837,PodSandboxId:568d7
60a61294de08310f83e127130fe8a09801ba4dbc58465908fb1ef7c57e1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744136022333329323,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-835623,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f8a102e74c6257b46c0c6e728f0ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fe1a046-2524-4f50-8421-ce91461bd65a name=/runtime.v1.RuntimeServ
ice/ListContainers
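	# Sketch: the entries above are CRI-O debug logging from the crio systemd unit inside the node.
	# A similar tail can usually be pulled from a live profile (illustrative, assuming addons-835623 is still running):
	#   out/minikube-linux-amd64 -p addons-835623 ssh -- "sudo journalctl -u crio --no-pager -n 25"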
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	705faad9a0d17       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   bbd410126eb42       nginx
	6bcbef5437701       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   9d721f735cb2d       busybox
	ec32f75c4ebd7       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   c00937bfdf753       ingress-nginx-controller-56d7c84fd4-bn2fn
	2d0c3b6e338ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   2d0ae67b7fb6e       ingress-nginx-admission-patch-k5wmv
	d92ac85834f21       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   e1e442191c900       ingress-nginx-admission-create-pbmcw
	953cd9a25d4d0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   c9ec7664d26b2       amd-gpu-device-plugin-lhz8n
	0ce1eb83135e5       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   e1001fdf07010       kube-ingress-dns-minikube
	92203465e80c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   6875815c95fa1       storage-provisioner
	104e6c8728f64       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   edfbaa2a4c3ad       coredns-668d6bf9bc-dbltp
	e66e5b8003279       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   4dba3daaf3ed3       kube-proxy-r4qmh
	d39d2dfc35152       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   fb1170165bee0       kube-controller-manager-addons-835623
	7dcf4e912fd12       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   1141ae680b640       etcd-addons-835623
	214dc9efc27f1       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   f34391426ad18       kube-apiserver-addons-835623
	1526c44a6021d       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   568d760a61294       kube-scheduler-addons-835623
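	# Sketch: this table has the shape of `crictl ps -a` output; on a live profile it can typically be
	# reproduced inside the VM (illustrative, assuming crictl is on the node's PATH):
	#   out/minikube-linux-amd64 -p addons-835623 ssh -- sudo crictl ps -a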
	
	
	==> coredns [104e6c8728f6475f128a925f7313cf42ecf443e9e625efa2dbe15b2a1ea7ed6b] <==
	[INFO] 10.244.0.8:54551 - 51458 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000145492s
	[INFO] 10.244.0.8:54551 - 55794 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000137255s
	[INFO] 10.244.0.8:54551 - 44044 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00009261s
	[INFO] 10.244.0.8:54551 - 43412 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000087056s
	[INFO] 10.244.0.8:54551 - 38615 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000069335s
	[INFO] 10.244.0.8:54551 - 48872 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121019s
	[INFO] 10.244.0.8:54551 - 61235 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000301451s
	[INFO] 10.244.0.8:36053 - 25540 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000411328s
	[INFO] 10.244.0.8:36053 - 25277 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000405256s
	[INFO] 10.244.0.8:49816 - 9262 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114577s
	[INFO] 10.244.0.8:49816 - 9495 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000255467s
	[INFO] 10.244.0.8:58906 - 23195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105302s
	[INFO] 10.244.0.8:58906 - 23405 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081861s
	[INFO] 10.244.0.8:39961 - 57542 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000222223s
	[INFO] 10.244.0.8:39961 - 57331 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000353879s
	[INFO] 10.244.0.23:47114 - 27988 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000302343s
	[INFO] 10.244.0.23:54862 - 14736 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000328613s
	[INFO] 10.244.0.23:33065 - 25808 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149238s
	[INFO] 10.244.0.23:57140 - 2720 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124017s
	[INFO] 10.244.0.23:49148 - 15749 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118791s
	[INFO] 10.244.0.23:41157 - 30436 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109132s
	[INFO] 10.244.0.23:49419 - 24205 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001141475s
	[INFO] 10.244.0.23:58054 - 58622 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000878395s
	[INFO] 10.244.0.28:52870 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0003615s
	[INFO] 10.244.0.28:36875 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000195776s
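	# Sketch: the NXDOMAIN answers above are the normal ndots search-path expansion (each cluster search
	# suffix is tried before the bare service name resolves with NOERROR), not lookup failures. In-cluster
	# DNS can be exercised the same way with a throwaway pod (illustrative, not captured output):
	#   kubectl --context addons-835623 run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup registry.kube-system.svc.cluster.local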
	
	
	==> describe nodes <==
	Name:               addons-835623
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-835623
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=00fec7ad00298ce3ccd71a2d57a7f829f082fec8
	                    minikube.k8s.io/name=addons-835623
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_08T18_13_48_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-835623
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 18:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-835623
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 18:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 18:16:31 +0000   Tue, 08 Apr 2025 18:13:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 18:16:31 +0000   Tue, 08 Apr 2025 18:13:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 18:16:31 +0000   Tue, 08 Apr 2025 18:13:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 18:16:31 +0000   Tue, 08 Apr 2025 18:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-835623
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c898c2dd20724ddaa072381d001f9c2a
	  System UUID:                c898c2dd-2072-4dda-a072-381d001f9c2a
	  Boot ID:                    8cd3c05d-d638-45eb-b10f-3eadfeb854d9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-7d9564db4-rd45f              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-bn2fn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m30s
	  kube-system                 amd-gpu-device-plugin-lhz8n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-668d6bf9bc-dbltp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-835623                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-835623                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-835623        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-r4qmh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-835623                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m35s                  kube-proxy       
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x4 over 4m49s)  kubelet          Node addons-835623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x4 over 4m49s)  kubelet          Node addons-835623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x3 over 4m49s)  kubelet          Node addons-835623 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s                  kubelet          Node addons-835623 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s                  kubelet          Node addons-835623 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s                  kubelet          Node addons-835623 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s                  kubelet          Node addons-835623 status is now: NodeReady
	  Normal  RegisteredNode           4m39s                  node-controller  Node addons-835623 event: Registered Node addons-835623 in Controller
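	# Sketch: the node summary above corresponds to `kubectl describe node`; against a live profile it can be
	# regenerated with (illustrative):
	#   kubectl --context addons-835623 describe node addons-835623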
	
	
	==> dmesg <==
	[  +0.060392] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994051] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +0.103381] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.213832] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.155260] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.040187] kauditd_printk_skb: 124 callbacks suppressed
	[Apr 8 18:14] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.002575] kauditd_printk_skb: 98 callbacks suppressed
	[ +19.884756] kauditd_printk_skb: 9 callbacks suppressed
	[ +12.191890] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.526960] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.495931] kauditd_printk_skb: 34 callbacks suppressed
	[Apr 8 18:15] kauditd_printk_skb: 64 callbacks suppressed
	[  +8.702206] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.408562] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.990913] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.425132] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.025862] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.009929] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.080299] kauditd_printk_skb: 49 callbacks suppressed
	[Apr 8 18:16] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.501071] kauditd_printk_skb: 30 callbacks suppressed
	[  +9.289460] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.927888] kauditd_printk_skb: 18 callbacks suppressed
	[Apr 8 18:18] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7dcf4e912fd122f5d6d51cb9b8ab7a226a7bbf90a59b598c490d034375c90f47] <==
	{"level":"info","ts":"2025-04-08T18:15:10.263397Z","caller":"traceutil/trace.go:171","msg":"trace[1540868801] linearizableReadLoop","detail":"{readStateIndex:1135; appliedIndex:1135; }","duration":"239.449218ms","start":"2025-04-08T18:15:10.023933Z","end":"2025-04-08T18:15:10.263382Z","steps":["trace[1540868801] 'read index received'  (duration: 239.443559ms)","trace[1540868801] 'applied index is now lower than readState.Index'  (duration: 4.606µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T18:15:10.263609Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.646467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:15:10.266045Z","caller":"traceutil/trace.go:171","msg":"trace[1240182221] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1101; }","duration":"242.125778ms","start":"2025-04-08T18:15:10.023908Z","end":"2025-04-08T18:15:10.266033Z","steps":["trace[1240182221] 'agreement among raft nodes before linearized reading'  (duration: 239.626621ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T18:15:10.264980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.327916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:15:10.266280Z","caller":"traceutil/trace.go:171","msg":"trace[1922963354] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"130.656393ms","start":"2025-04-08T18:15:10.135614Z","end":"2025-04-08T18:15:10.266271Z","steps":["trace[1922963354] 'agreement among raft nodes before linearized reading'  (duration: 129.322468ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:15:10.265012Z","caller":"traceutil/trace.go:171","msg":"trace[1156229590] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"239.017168ms","start":"2025-04-08T18:15:10.025988Z","end":"2025-04-08T18:15:10.265005Z","steps":["trace[1156229590] 'process raft request'  (duration: 238.884529ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:15:12.586342Z","caller":"traceutil/trace.go:171","msg":"trace[1119379605] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"310.649124ms","start":"2025-04-08T18:15:12.275678Z","end":"2025-04-08T18:15:12.586327Z","steps":["trace[1119379605] 'process raft request'  (duration: 310.20674ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T18:15:12.586429Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T18:15:12.275650Z","time spent":"310.731163ms","remote":"127.0.0.1:55382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1101 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-04-08T18:15:47.326973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.683984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:15:47.327053Z","caller":"traceutil/trace.go:171","msg":"trace[1366641483] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1327; }","duration":"269.790166ms","start":"2025-04-08T18:15:47.057252Z","end":"2025-04-08T18:15:47.327042Z","steps":["trace[1366641483] 'range keys from in-memory index tree'  (duration: 269.609788ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T18:15:47.328122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.413493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-08T18:15:47.328176Z","caller":"traceutil/trace.go:171","msg":"trace[1513402390] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1327; }","duration":"206.550453ms","start":"2025-04-08T18:15:47.121615Z","end":"2025-04-08T18:15:47.328166Z","steps":["trace[1513402390] 'count revisions from in-memory index tree'  (duration: 206.316454ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:16:08.314243Z","caller":"traceutil/trace.go:171","msg":"trace[861098806] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"258.661314ms","start":"2025-04-08T18:16:08.055567Z","end":"2025-04-08T18:16:08.314228Z","steps":["trace[861098806] 'process raft request'  (duration: 258.548674ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:16:08.314769Z","caller":"traceutil/trace.go:171","msg":"trace[702418009] linearizableReadLoop","detail":"{readStateIndex:1599; appliedIndex:1599; }","duration":"217.697321ms","start":"2025-04-08T18:16:08.097026Z","end":"2025-04-08T18:16:08.314724Z","steps":["trace[702418009] 'read index received'  (duration: 217.694559ms)","trace[702418009] 'applied index is now lower than readState.Index'  (duration: 2.206µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T18:16:08.314882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.840443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:16:08.314916Z","caller":"traceutil/trace.go:171","msg":"trace[1868255044] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1542; }","duration":"217.925782ms","start":"2025-04-08T18:16:08.096984Z","end":"2025-04-08T18:16:08.314910Z","steps":["trace[1868255044] 'agreement among raft nodes before linearized reading'  (duration: 217.846303ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:16:17.363271Z","caller":"traceutil/trace.go:171","msg":"trace[1069190394] linearizableReadLoop","detail":"{readStateIndex:1666; appliedIndex:1665; }","duration":"134.972055ms","start":"2025-04-08T18:16:17.228285Z","end":"2025-04-08T18:16:17.363257Z","steps":["trace[1069190394] 'read index received'  (duration: 134.850685ms)","trace[1069190394] 'applied index is now lower than readState.Index'  (duration: 120.7µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T18:16:17.363532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.228226ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:16:17.363576Z","caller":"traceutil/trace.go:171","msg":"trace[622888259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1606; }","duration":"135.286053ms","start":"2025-04-08T18:16:17.228280Z","end":"2025-04-08T18:16:17.363566Z","steps":["trace[622888259] 'agreement among raft nodes before linearized reading'  (duration: 135.050072ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T18:16:17.363705Z","caller":"traceutil/trace.go:171","msg":"trace[241168300] transaction","detail":"{read_only:false; response_revision:1606; number_of_response:1; }","duration":"384.207951ms","start":"2025-04-08T18:16:16.979481Z","end":"2025-04-08T18:16:17.363689Z","steps":["trace[241168300] 'process raft request'  (duration: 383.674601ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T18:16:17.363808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-08T18:16:16.979418Z","time spent":"384.340353ms","remote":"127.0.0.1:55382","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1601 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-04-08T18:16:49.186392Z","caller":"traceutil/trace.go:171","msg":"trace[895190072] linearizableReadLoop","detail":"{readStateIndex:1882; appliedIndex:1881; }","duration":"129.313171ms","start":"2025-04-08T18:16:49.057066Z","end":"2025-04-08T18:16:49.186379Z","steps":["trace[895190072] 'read index received'  (duration: 129.186375ms)","trace[895190072] 'applied index is now lower than readState.Index'  (duration: 126.443µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-08T18:16:49.186666Z","caller":"traceutil/trace.go:171","msg":"trace[642217690] transaction","detail":"{read_only:false; response_revision:1811; number_of_response:1; }","duration":"192.481764ms","start":"2025-04-08T18:16:48.994173Z","end":"2025-04-08T18:16:49.186655Z","steps":["trace[642217690] 'process raft request'  (duration: 192.123964ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-08T18:16:49.186870Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.78966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T18:16:49.186894Z","caller":"traceutil/trace.go:171","msg":"trace[742459423] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1811; }","duration":"129.855263ms","start":"2025-04-08T18:16:49.057033Z","end":"2025-04-08T18:16:49.186888Z","steps":["trace[742459423] 'agreement among raft nodes before linearized reading'  (duration: 129.799213ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:18:30 up 5 min,  0 users,  load average: 0.48, 1.10, 0.59
	Linux addons-835623 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [214dc9efc27f1d58b399a684c4f6c4fd471e71750dee77032281d9601cfe5843] <==
	E0408 18:14:31.735241       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0408 18:15:30.018719       1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:36564: use of closed network connection
	E0408 18:15:30.217576       1 conn.go:339] Error on socket receive: read tcp 192.168.39.89:8443->192.168.39.1:36588: use of closed network connection
	I0408 18:15:39.727144       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.72.231"}
	I0408 18:16:05.884941       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0408 18:16:06.089006       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.189.28"}
	I0408 18:16:09.299732       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0408 18:16:09.543872       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0408 18:16:10.342893       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0408 18:16:25.050725       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0408 18:16:32.683367       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0408 18:16:36.904753       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:16:36.909642       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:16:36.971937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:16:36.972156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:16:37.040842       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:16:37.040959       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:16:37.070992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:16:37.071049       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 18:16:37.185947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 18:16:37.186101       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0408 18:16:38.071238       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0408 18:16:38.186766       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0408 18:16:38.194812       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0408 18:18:28.951552       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.196.243"}
	
	
	==> kube-controller-manager [d39d2dfc35152466c82799d9a0798146e2973e8301d7fbef59d57f1f49275b0f] <==
	E0408 18:17:18.735072       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 18:17:21.566722       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 18:17:21.567690       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0408 18:17:21.568582       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:17:21.568625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 18:17:48.226107       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 18:17:48.227760       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0408 18:17:48.230170       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:17:48.230300       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 18:18:02.901167       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 18:18:02.902305       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0408 18:18:02.903334       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:18:02.903399       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 18:18:05.998544       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 18:18:05.999951       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0408 18:18:06.000905       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:18:06.001026       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0408 18:18:09.046140       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0408 18:18:09.047421       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0408 18:18:09.048430       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 18:18:09.048515       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0408 18:18:28.768221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="43.243388ms"
	I0408 18:18:28.782194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.882445ms"
	I0408 18:18:28.782287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.603µs"
	I0408 18:18:28.791536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="55.204µs"
	
	
	==> kube-proxy [e66e5b8003279976786346c98db439a7d1286929a3c8f204e1f98324e8e0e60f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0408 18:13:55.171385       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0408 18:13:55.203667       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.89"]
	E0408 18:13:55.203736       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0408 18:13:55.296772       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0408 18:13:55.296799       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 18:13:55.296819       1 server_linux.go:170] "Using iptables Proxier"
	I0408 18:13:55.300812       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0408 18:13:55.301108       1 server.go:497] "Version info" version="v1.32.2"
	I0408 18:13:55.301121       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 18:13:55.304249       1 config.go:199] "Starting service config controller"
	I0408 18:13:55.304266       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 18:13:55.304313       1 config.go:105] "Starting endpoint slice config controller"
	I0408 18:13:55.304319       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 18:13:55.305205       1 config.go:329] "Starting node config controller"
	I0408 18:13:55.305241       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 18:13:55.404359       1 shared_informer.go:320] Caches are synced for service config
	I0408 18:13:55.404478       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0408 18:13:55.405303       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1526c44a6021d0c536b3b609eeec0fc22c893fa034bba4d87baa302315ce0837] <==
	W0408 18:13:44.656702       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 18:13:44.656763       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:44.657364       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 18:13:44.657416       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.675252       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 18:13:45.675444       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.759483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 18:13:45.759647       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.761871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 18:13:45.762158       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.799756       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 18:13:45.799865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.816070       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 18:13:45.816173       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.838164       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 18:13:45.838307       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.848555       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 18:13:45.848763       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0408 18:13:45.854872       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 18:13:45.854974       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.883401       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 18:13:45.883525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 18:13:45.945376       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 18:13:45.945439       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0408 18:13:48.652782       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 18:17:47 addons-835623 kubelet[1224]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 18:17:47 addons-835623 kubelet[1224]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 18:17:47 addons-835623 kubelet[1224]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 18:17:47 addons-835623 kubelet[1224]: E0408 18:17:47.851528    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136267850869500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:17:47 addons-835623 kubelet[1224]: E0408 18:17:47.851661    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136267850869500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:17:57 addons-835623 kubelet[1224]: E0408 18:17:57.853812    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136277853346985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:17:57 addons-835623 kubelet[1224]: E0408 18:17:57.853879    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136277853346985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:07 addons-835623 kubelet[1224]: E0408 18:18:07.856495    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136287856022056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:07 addons-835623 kubelet[1224]: E0408 18:18:07.856774    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136287856022056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:17 addons-835623 kubelet[1224]: E0408 18:18:17.860700    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136297860193674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:17 addons-835623 kubelet[1224]: E0408 18:18:17.861015    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136297860193674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:27 addons-835623 kubelet[1224]: E0408 18:18:27.865926    1224 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136307865005589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:27 addons-835623 kubelet[1224]: E0408 18:18:27.865984    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744136307865005589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765281    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="liveness-probe"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765362    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="csi-external-health-monitor-controller"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765371    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="csi-provisioner"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765380    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="csi-snapshotter"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765385    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="0e1005c2-6ced-4c85-ba41-07ee7b5389ed" containerName="volume-snapshot-controller"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765390    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6af5f20-ae63-42dd-b6b4-d17cfbd68d54" containerName="csi-resizer"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765395    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="9124d159-9c86-438b-8e4a-fed6aad38850" containerName="volume-snapshot-controller"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765400    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="a71722cd-3be3-41de-9479-75374b22dcae" containerName="csi-attacher"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765404    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="hostpath"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765409    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="00a44f07-c161-4e70-bc20-d3c8230b96e5" containerName="task-pv-container"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.765414    1224 memory_manager.go:355] "RemoveStaleState removing state" podUID="6fc9b7f3-e419-4bea-b32b-5ede88e1b61f" containerName="node-driver-registrar"
	Apr 08 18:18:28 addons-835623 kubelet[1224]: I0408 18:18:28.921977    1224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rkdn\" (UniqueName: \"kubernetes.io/projected/a7c0870f-da86-40ef-8a62-b9d02ea2c30f-kube-api-access-5rkdn\") pod \"hello-world-app-7d9564db4-rd45f\" (UID: \"a7c0870f-da86-40ef-8a62-b9d02ea2c30f\") " pod="default/hello-world-app-7d9564db4-rd45f"
	
	
	==> storage-provisioner [92203465e80c98f584410e953f6f3853d557eb0bad092a645bb1d543e876c196] <==
	I0408 18:13:59.787531       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 18:13:59.851155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 18:13:59.851247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 18:13:59.899409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"772eba78-dc97-4367-ae33-334837a020cf", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-835623_eb72fdd7-b10d-48f7-b051-f2a29b6e1b2b became leader
	I0408 18:13:59.899527       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 18:13:59.899608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-835623_eb72fdd7-b10d-48f7-b051-f2a29b6e1b2b!
	I0408 18:14:00.001360       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-835623_eb72fdd7-b10d-48f7-b051-f2a29b6e1b2b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-835623 -n addons-835623
helpers_test.go:261: (dbg) Run:  kubectl --context addons-835623 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-rd45f ingress-nginx-admission-create-pbmcw ingress-nginx-admission-patch-k5wmv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-835623 describe pod hello-world-app-7d9564db4-rd45f ingress-nginx-admission-create-pbmcw ingress-nginx-admission-patch-k5wmv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-835623 describe pod hello-world-app-7d9564db4-rd45f ingress-nginx-admission-create-pbmcw ingress-nginx-admission-patch-k5wmv: exit status 1 (81.3235ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-rd45f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-835623/192.168.39.89
	Start Time:       Tue, 08 Apr 2025 18:18:28 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rkdn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5rkdn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-rd45f to addons-835623
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pbmcw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k5wmv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-835623 describe pod hello-world-app-7d9564db4-rd45f ingress-nginx-admission-create-pbmcw ingress-nginx-admission-patch-k5wmv: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable ingress-dns --alsologtostderr -v=1: (1.307374131s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable ingress --alsologtostderr -v=1: (7.778568483s)
--- FAIL: TestAddons/parallel/Ingress (154.91s)

                                                
                                    
TestPreload (210.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-079033 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0408 19:08:30.238547  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-079033 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.075207966s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-079033 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-079033 image pull gcr.io/k8s-minikube/busybox: (4.014818251s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-079033
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-079033: (6.666616065s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-079033 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0408 19:10:02.985070  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:10:19.901657  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-079033 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.088003395s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-079033 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-08 19:10:40.682640246 +0000 UTC m=+3475.661824442
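The failure reduces to the manually pulled gcr.io/k8s-minikube/busybox image being absent from the image list after the stop/start cycle, while the core cached images are present. A minimal manual repro sketch using only the minikube commands recorded above (substitute the locally built out/minikube-linux-amd64 binary used in this run; the profile name preload-check is an arbitrary placeholder, not taken from the test):

	minikube start -p preload-check --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p preload-check image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-check
	minikube start -p preload-check --memory=2200 --driver=kvm2 --container-runtime=crio
	# the assertion at preload_test.go:76 amounts to this check: the pulled image should still be listed
	minikube -p preload-check image list | grep k8s-minikube/busybox || echo "busybox missing after restart"
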
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-079033 -n test-preload-079033
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-079033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-079033 logs -n 25: (1.173214709s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-481713 ssh -n                                                                 | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	|         | multinode-481713-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-481713 ssh -n multinode-481713 sudo cat                                       | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	|         | /home/docker/cp-test_multinode-481713-m03_multinode-481713.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-481713 cp multinode-481713-m03:/home/docker/cp-test.txt                       | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	|         | multinode-481713-m02:/home/docker/cp-test_multinode-481713-m03_multinode-481713-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-481713 ssh -n                                                                 | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	|         | multinode-481713-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-481713 ssh -n multinode-481713-m02 sudo cat                                   | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	|         | /home/docker/cp-test_multinode-481713-m03_multinode-481713-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-481713 node stop m03                                                          | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:54 UTC |
	| node    | multinode-481713 node start                                                             | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:54 UTC | 08 Apr 25 18:55 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-481713                                                                | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:55 UTC |                     |
	| stop    | -p multinode-481713                                                                     | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:55 UTC | 08 Apr 25 18:58 UTC |
	| start   | -p multinode-481713                                                                     | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 18:58 UTC | 08 Apr 25 19:01 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-481713                                                                | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:01 UTC |                     |
	| node    | multinode-481713 node delete                                                            | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:01 UTC | 08 Apr 25 19:01 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-481713 stop                                                                   | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:01 UTC | 08 Apr 25 19:04 UTC |
	| start   | -p multinode-481713                                                                     | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:04 UTC | 08 Apr 25 19:06 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-481713                                                                | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:06 UTC |                     |
	| start   | -p multinode-481713-m02                                                                 | multinode-481713-m02 | jenkins | v1.35.0 | 08 Apr 25 19:06 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-481713-m03                                                                 | multinode-481713-m03 | jenkins | v1.35.0 | 08 Apr 25 19:06 UTC | 08 Apr 25 19:07 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-481713                                                                 | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:07 UTC |                     |
	| delete  | -p multinode-481713-m03                                                                 | multinode-481713-m03 | jenkins | v1.35.0 | 08 Apr 25 19:07 UTC | 08 Apr 25 19:07 UTC |
	| delete  | -p multinode-481713                                                                     | multinode-481713     | jenkins | v1.35.0 | 08 Apr 25 19:07 UTC | 08 Apr 25 19:07 UTC |
	| start   | -p test-preload-079033                                                                  | test-preload-079033  | jenkins | v1.35.0 | 08 Apr 25 19:07 UTC | 08 Apr 25 19:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-079033 image pull                                                          | test-preload-079033  | jenkins | v1.35.0 | 08 Apr 25 19:09 UTC | 08 Apr 25 19:09 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-079033                                                                  | test-preload-079033  | jenkins | v1.35.0 | 08 Apr 25 19:09 UTC | 08 Apr 25 19:09 UTC |
	| start   | -p test-preload-079033                                                                  | test-preload-079033  | jenkins | v1.35.0 | 08 Apr 25 19:09 UTC | 08 Apr 25 19:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-079033 image list                                                          | test-preload-079033  | jenkins | v1.35.0 | 08 Apr 25 19:10 UTC | 08 Apr 25 19:10 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 19:09:38
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:09:38.409060  180276 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:09:38.409203  180276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:09:38.409217  180276 out.go:358] Setting ErrFile to fd 2...
	I0408 19:09:38.409224  180276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:09:38.409411  180276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:09:38.410119  180276 out.go:352] Setting JSON to false
	I0408 19:09:38.411195  180276 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10324,"bootTime":1744129055,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:09:38.411324  180276 start.go:139] virtualization: kvm guest
	I0408 19:09:38.413989  180276 out.go:177] * [test-preload-079033] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:09:38.415754  180276 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:09:38.415762  180276 notify.go:220] Checking for updates...
	I0408 19:09:38.417744  180276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:09:38.419554  180276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:09:38.421357  180276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:09:38.423612  180276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:09:38.425500  180276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:09:38.427655  180276 config.go:182] Loaded profile config "test-preload-079033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0408 19:09:38.428218  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:09:38.428302  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:09:38.445375  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I0408 19:09:38.446114  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:09:38.446736  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:09:38.446757  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:09:38.447201  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:09:38.447471  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:09:38.449872  180276 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0408 19:09:38.451612  180276 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:09:38.451973  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:09:38.452043  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:09:38.468660  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0408 19:09:38.469271  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:09:38.469782  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:09:38.469806  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:09:38.470809  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:09:38.472568  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:09:38.511275  180276 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 19:09:38.513108  180276 start.go:297] selected driver: kvm2
	I0408 19:09:38.513132  180276 start.go:901] validating driver "kvm2" against &{Name:test-preload-079033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-079033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:09:38.513246  180276 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:09:38.514106  180276 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:09:38.514216  180276 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:09:38.531730  180276 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:09:38.532614  180276 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:09:38.532673  180276 cni.go:84] Creating CNI manager for ""
	I0408 19:09:38.532709  180276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:09:38.532774  180276 start.go:340] cluster config:
	{Name:test-preload-079033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-079033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:09:38.532890  180276 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:09:38.534713  180276 out.go:177] * Starting "test-preload-079033" primary control-plane node in "test-preload-079033" cluster
	I0408 19:09:38.536064  180276 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0408 19:09:38.561101  180276 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0408 19:09:38.561143  180276 cache.go:56] Caching tarball of preloaded images
	I0408 19:09:38.561328  180276 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0408 19:09:38.563369  180276 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0408 19:09:38.564903  180276 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0408 19:09:38.588137  180276 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0408 19:09:41.317174  180276 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0408 19:09:41.317289  180276 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0408 19:09:42.198337  180276 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0408 19:09:42.198465  180276 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/config.json ...
	I0408 19:09:42.198693  180276 start.go:360] acquireMachinesLock for test-preload-079033: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:09:42.198761  180276 start.go:364] duration metric: took 45.105µs to acquireMachinesLock for "test-preload-079033"
	I0408 19:09:42.198778  180276 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:09:42.198783  180276 fix.go:54] fixHost starting: 
	I0408 19:09:42.199033  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:09:42.199073  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:09:42.215181  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0408 19:09:42.215699  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:09:42.216241  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:09:42.216264  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:09:42.216645  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:09:42.216882  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:09:42.217032  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetState
	I0408 19:09:42.218926  180276 fix.go:112] recreateIfNeeded on test-preload-079033: state=Stopped err=<nil>
	I0408 19:09:42.218978  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	W0408 19:09:42.219168  180276 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:09:42.221548  180276 out.go:177] * Restarting existing kvm2 VM for "test-preload-079033" ...
	I0408 19:09:42.223602  180276 main.go:141] libmachine: (test-preload-079033) Calling .Start
	I0408 19:09:42.223964  180276 main.go:141] libmachine: (test-preload-079033) starting domain...
	I0408 19:09:42.224010  180276 main.go:141] libmachine: (test-preload-079033) ensuring networks are active...
	I0408 19:09:42.225083  180276 main.go:141] libmachine: (test-preload-079033) Ensuring network default is active
	I0408 19:09:42.225524  180276 main.go:141] libmachine: (test-preload-079033) Ensuring network mk-test-preload-079033 is active
	I0408 19:09:42.225936  180276 main.go:141] libmachine: (test-preload-079033) getting domain XML...
	I0408 19:09:42.226758  180276 main.go:141] libmachine: (test-preload-079033) creating domain...
	I0408 19:09:43.531975  180276 main.go:141] libmachine: (test-preload-079033) waiting for IP...
	I0408 19:09:43.532839  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:43.533376  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:43.533466  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:43.533363  180329 retry.go:31] will retry after 227.920749ms: waiting for domain to come up
	I0408 19:09:43.763091  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:43.763704  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:43.763764  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:43.763668  180329 retry.go:31] will retry after 289.28479ms: waiting for domain to come up
	I0408 19:09:44.054448  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:44.054847  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:44.054918  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:44.054858  180329 retry.go:31] will retry after 439.105533ms: waiting for domain to come up
	I0408 19:09:44.495730  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:44.496329  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:44.496412  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:44.496319  180329 retry.go:31] will retry after 394.836011ms: waiting for domain to come up
	I0408 19:09:44.893249  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:44.893790  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:44.893828  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:44.893754  180329 retry.go:31] will retry after 585.874159ms: waiting for domain to come up
	I0408 19:09:45.481958  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:45.482537  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:45.482605  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:45.482519  180329 retry.go:31] will retry after 714.119187ms: waiting for domain to come up
	I0408 19:09:46.198773  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:46.199357  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:46.199383  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:46.199321  180329 retry.go:31] will retry after 969.462568ms: waiting for domain to come up
	I0408 19:09:47.170665  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:47.171052  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:47.171079  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:47.171004  180329 retry.go:31] will retry after 1.362191819s: waiting for domain to come up
	I0408 19:09:48.535077  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:48.535425  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:48.535453  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:48.535402  180329 retry.go:31] will retry after 1.724810951s: waiting for domain to come up
	I0408 19:09:50.262523  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:50.263102  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:50.263121  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:50.263057  180329 retry.go:31] will retry after 2.085576607s: waiting for domain to come up
	I0408 19:09:52.350684  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:52.351240  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:52.351274  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:52.351206  180329 retry.go:31] will retry after 2.260693161s: waiting for domain to come up
	I0408 19:09:54.614786  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:54.615455  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:54.615512  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:54.615414  180329 retry.go:31] will retry after 2.519257017s: waiting for domain to come up
	I0408 19:09:57.138366  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:09:57.138925  180276 main.go:141] libmachine: (test-preload-079033) DBG | unable to find current IP address of domain test-preload-079033 in network mk-test-preload-079033
	I0408 19:09:57.138959  180276 main.go:141] libmachine: (test-preload-079033) DBG | I0408 19:09:57.138868  180329 retry.go:31] will retry after 3.675468042s: waiting for domain to come up
	I0408 19:10:00.818414  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.818939  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has current primary IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.818967  180276 main.go:141] libmachine: (test-preload-079033) found domain IP: 192.168.39.253
	I0408 19:10:00.819029  180276 main.go:141] libmachine: (test-preload-079033) reserving static IP address...
	I0408 19:10:00.819515  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "test-preload-079033", mac: "52:54:00:8f:f4:b6", ip: "192.168.39.253"} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:00.819535  180276 main.go:141] libmachine: (test-preload-079033) reserved static IP address 192.168.39.253 for domain test-preload-079033
	I0408 19:10:00.819547  180276 main.go:141] libmachine: (test-preload-079033) DBG | skip adding static IP to network mk-test-preload-079033 - found existing host DHCP lease matching {name: "test-preload-079033", mac: "52:54:00:8f:f4:b6", ip: "192.168.39.253"}
	I0408 19:10:00.819561  180276 main.go:141] libmachine: (test-preload-079033) DBG | Getting to WaitForSSH function...
	I0408 19:10:00.819571  180276 main.go:141] libmachine: (test-preload-079033) waiting for SSH...
	I0408 19:10:00.822280  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.822750  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:00.822783  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.822987  180276 main.go:141] libmachine: (test-preload-079033) DBG | Using SSH client type: external
	I0408 19:10:00.823018  180276 main.go:141] libmachine: (test-preload-079033) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa (-rw-------)
	I0408 19:10:00.823056  180276 main.go:141] libmachine: (test-preload-079033) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:10:00.823073  180276 main.go:141] libmachine: (test-preload-079033) DBG | About to run SSH command:
	I0408 19:10:00.823089  180276 main.go:141] libmachine: (test-preload-079033) DBG | exit 0
	I0408 19:10:00.950533  180276 main.go:141] libmachine: (test-preload-079033) DBG | SSH cmd err, output: <nil>: 
	I0408 19:10:00.950979  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetConfigRaw
	I0408 19:10:00.951832  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetIP
	I0408 19:10:00.955234  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.955727  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:00.955766  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.956135  180276 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/config.json ...
	I0408 19:10:00.956405  180276 machine.go:93] provisionDockerMachine start ...
	I0408 19:10:00.956431  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:00.956786  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:00.960236  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.960859  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:00.960890  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:00.961225  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:00.961506  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:00.961740  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:00.961939  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:00.962198  180276 main.go:141] libmachine: Using SSH client type: native
	I0408 19:10:00.962753  180276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0408 19:10:00.962773  180276 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:10:01.074878  180276 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 19:10:01.074909  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetMachineName
	I0408 19:10:01.075204  180276 buildroot.go:166] provisioning hostname "test-preload-079033"
	I0408 19:10:01.075254  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetMachineName
	I0408 19:10:01.075606  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:01.080472  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.081129  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.081192  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.081430  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:01.081719  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.082006  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.082238  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:01.082525  180276 main.go:141] libmachine: Using SSH client type: native
	I0408 19:10:01.082756  180276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0408 19:10:01.082770  180276 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-079033 && echo "test-preload-079033" | sudo tee /etc/hostname
	I0408 19:10:01.208065  180276 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-079033
	
	I0408 19:10:01.208100  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:01.212043  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.212512  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.212530  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.212801  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:01.213037  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.213198  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.213387  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:01.213590  180276 main.go:141] libmachine: Using SSH client type: native
	I0408 19:10:01.213904  180276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0408 19:10:01.213928  180276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-079033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-079033/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-079033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:10:01.335530  180276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:10:01.335576  180276 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:10:01.335596  180276 buildroot.go:174] setting up certificates
	I0408 19:10:01.335605  180276 provision.go:84] configureAuth start
	I0408 19:10:01.335614  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetMachineName
	I0408 19:10:01.335920  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetIP
	I0408 19:10:01.339352  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.339927  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.339967  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.340299  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:01.343714  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.344188  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.344218  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.344377  180276 provision.go:143] copyHostCerts
	I0408 19:10:01.344451  180276 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:10:01.344470  180276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:10:01.344536  180276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:10:01.344623  180276 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:10:01.344630  180276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:10:01.344655  180276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:10:01.344710  180276 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:10:01.344717  180276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:10:01.344736  180276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:10:01.344783  180276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.test-preload-079033 san=[127.0.0.1 192.168.39.253 localhost minikube test-preload-079033]
	I0408 19:10:01.684761  180276 provision.go:177] copyRemoteCerts
	I0408 19:10:01.684834  180276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:10:01.684866  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:01.688199  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.688761  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.688798  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.689050  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:01.689347  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.689604  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:01.689817  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:01.777676  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 19:10:01.805882  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:10:01.831586  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:10:01.858777  180276 provision.go:87] duration metric: took 523.158975ms to configureAuth
	I0408 19:10:01.858807  180276 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:10:01.858991  180276 config.go:182] Loaded profile config "test-preload-079033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0408 19:10:01.859073  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:01.862767  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.863176  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:01.863225  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:01.863495  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:01.863727  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.863875  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:01.864052  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:01.864271  180276 main.go:141] libmachine: Using SSH client type: native
	I0408 19:10:01.864568  180276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0408 19:10:01.864594  180276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:10:02.096422  180276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:10:02.096519  180276 machine.go:96] duration metric: took 1.140096831s to provisionDockerMachine
	I0408 19:10:02.096535  180276 start.go:293] postStartSetup for "test-preload-079033" (driver="kvm2")
	I0408 19:10:02.096546  180276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:10:02.096566  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:02.096920  180276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:10:02.096976  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:02.100503  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.100982  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:02.101015  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.101240  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:02.101431  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:02.101582  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:02.101873  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:02.190398  180276 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:10:02.195561  180276 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:10:02.195592  180276 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:10:02.195670  180276 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:10:02.195752  180276 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:10:02.195848  180276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:10:02.208739  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:10:02.236950  180276 start.go:296] duration metric: took 140.398286ms for postStartSetup
	I0408 19:10:02.236999  180276 fix.go:56] duration metric: took 20.038213593s for fixHost
	I0408 19:10:02.237027  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:02.240619  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.240950  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:02.240981  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.241196  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:02.241406  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:02.241612  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:02.241754  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:02.241997  180276 main.go:141] libmachine: Using SSH client type: native
	I0408 19:10:02.242202  180276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0408 19:10:02.242216  180276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:10:02.355411  180276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744139402.328960974
	
	I0408 19:10:02.355442  180276 fix.go:216] guest clock: 1744139402.328960974
	I0408 19:10:02.355459  180276 fix.go:229] Guest: 2025-04-08 19:10:02.328960974 +0000 UTC Remote: 2025-04-08 19:10:02.237004736 +0000 UTC m=+23.870035637 (delta=91.956238ms)
	I0408 19:10:02.355487  180276 fix.go:200] guest clock delta is within tolerance: 91.956238ms
	I0408 19:10:02.355493  180276 start.go:83] releasing machines lock for "test-preload-079033", held for 20.156721412s
	I0408 19:10:02.355515  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:02.355913  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetIP
	I0408 19:10:02.360266  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.361030  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:02.361061  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.361300  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:02.362016  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:02.362259  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:02.362364  180276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:10:02.362409  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:02.362542  180276 ssh_runner.go:195] Run: cat /version.json
	I0408 19:10:02.362576  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:02.365548  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.365717  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.365937  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:02.365978  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.366160  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:02.366313  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:02.366342  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:02.366437  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:02.366555  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:02.366633  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:02.366815  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:02.366811  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:02.367047  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:02.367264  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:02.478836  180276 ssh_runner.go:195] Run: systemctl --version
	I0408 19:10:02.485644  180276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:10:02.634884  180276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:10:02.641907  180276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:10:02.642004  180276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:10:02.661773  180276 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:10:02.661802  180276 start.go:495] detecting cgroup driver to use...
	I0408 19:10:02.661905  180276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:10:02.683843  180276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:10:02.702879  180276 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:10:02.702953  180276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:10:02.719942  180276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:10:02.737010  180276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:10:02.857543  180276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:10:03.021725  180276 docker.go:233] disabling docker service ...
	I0408 19:10:03.021797  180276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:10:03.037114  180276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:10:03.051794  180276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:10:03.182807  180276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:10:03.316126  180276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:10:03.331595  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:10:03.352968  180276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0408 19:10:03.353036  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.364952  180276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:10:03.365035  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.377487  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.389672  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.402459  180276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:10:03.417072  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.428625  180276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.448367  180276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:10:03.461126  180276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:10:03.473038  180276 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:10:03.473116  180276 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:10:03.488671  180276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:10:03.504606  180276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:10:03.633354  180276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:10:03.728895  180276 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:10:03.728978  180276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:10:03.734711  180276 start.go:563] Will wait 60s for crictl version
	I0408 19:10:03.734785  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:03.739485  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:10:03.776902  180276 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:10:03.777005  180276 ssh_runner.go:195] Run: crio --version
	I0408 19:10:03.808070  180276 ssh_runner.go:195] Run: crio --version
	I0408 19:10:03.846060  180276 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0408 19:10:03.847912  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetIP
	I0408 19:10:03.851586  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:03.852172  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:03.852213  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:03.852593  180276 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 19:10:03.858152  180276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:10:03.873621  180276 kubeadm.go:883] updating cluster {Name:test-preload-079033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-079033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:10:03.873764  180276 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0408 19:10:03.873813  180276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:10:03.919207  180276 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0408 19:10:03.919335  180276 ssh_runner.go:195] Run: which lz4
	I0408 19:10:03.925112  180276 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:10:03.931477  180276 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:10:03.931558  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0408 19:10:05.674256  180276 crio.go:462] duration metric: took 1.749176576s to copy over tarball
	I0408 19:10:05.674345  180276 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:10:08.442227  180276 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.767843216s)
	I0408 19:10:08.442257  180276 crio.go:469] duration metric: took 2.767960522s to extract the tarball
	I0408 19:10:08.442265  180276 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:10:08.486528  180276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:10:08.533394  180276 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0408 19:10:08.533421  180276 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 19:10:08.533511  180276 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:10:08.533533  180276 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:08.533580  180276 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:08.533596  180276 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:08.533605  180276 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0408 19:10:08.533585  180276 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:08.533649  180276 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:08.533656  180276 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:08.535467  180276 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:08.535462  180276 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0408 19:10:08.535462  180276 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:08.535466  180276 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:08.535473  180276 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:08.535471  180276 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:10:08.535462  180276 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:08.535861  180276 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:08.675486  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:08.677341  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0408 19:10:08.690358  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:08.691098  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:08.692940  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:08.718564  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:08.719944  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:08.776047  180276 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0408 19:10:08.776102  180276 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:08.776176  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.843276  180276 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0408 19:10:08.843323  180276 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0408 19:10:08.843448  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.891418  180276 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0408 19:10:08.891459  180276 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0408 19:10:08.891484  180276 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:08.891495  180276 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:08.891543  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.891543  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.891682  180276 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0408 19:10:08.891726  180276 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:08.891777  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.891778  180276 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0408 19:10:08.891893  180276 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:08.891930  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.902051  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:08.902160  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0408 19:10:08.902050  180276 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0408 19:10:08.902273  180276 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:08.902320  180276 ssh_runner.go:195] Run: which crictl
	I0408 19:10:08.912889  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:08.912975  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:08.913119  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:08.913236  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:08.928063  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:09.074590  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0408 19:10:09.074654  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:09.074778  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:09.074828  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:09.074860  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:09.074931  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:09.100099  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:09.219776  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0408 19:10:09.241194  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0408 19:10:09.258984  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0408 19:10:09.259051  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0408 19:10:09.259132  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0408 19:10:09.259185  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0408 19:10:09.288732  180276 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0408 19:10:09.344723  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0408 19:10:09.344838  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0408 19:10:09.367729  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0408 19:10:09.367850  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0408 19:10:09.415396  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0408 19:10:09.415531  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0408 19:10:09.418031  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0408 19:10:09.418088  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0408 19:10:09.418162  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0408 19:10:09.418037  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0408 19:10:09.418290  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0408 19:10:09.418170  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0408 19:10:09.437563  180276 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0408 19:10:09.437633  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0408 19:10:09.437654  180276 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0408 19:10:09.437702  180276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0408 19:10:09.437704  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0408 19:10:09.437752  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0408 19:10:09.437779  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0408 19:10:09.438323  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0408 19:10:10.341048  180276 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:10:11.493181  180276 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.074856791s)
	I0408 19:10:11.493238  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0408 19:10:11.493186  180276 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.074835324s)
	I0408 19:10:11.493251  180276 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.055531432s)
	I0408 19:10:11.493255  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0408 19:10:11.493256  180276 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.055468588s)
	I0408 19:10:11.493267  180276 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0408 19:10:11.493274  180276 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.152195626s)
	I0408 19:10:11.493281  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0408 19:10:11.493309  180276 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0408 19:10:11.493389  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0408 19:10:12.948264  180276 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.454843372s)
	I0408 19:10:12.948301  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0408 19:10:12.948332  180276 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0408 19:10:12.948379  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0408 19:10:15.104517  180276 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.156108572s)
	I0408 19:10:15.104559  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0408 19:10:15.104592  180276 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0408 19:10:15.104646  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0408 19:10:15.557081  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0408 19:10:15.557136  180276 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0408 19:10:15.557208  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0408 19:10:16.006007  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0408 19:10:16.006054  180276 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0408 19:10:16.006106  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0408 19:10:16.750284  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0408 19:10:16.750336  180276 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0408 19:10:16.750391  180276 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0408 19:10:17.596021  180276 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0408 19:10:17.596069  180276 cache_images.go:123] Successfully loaded all cached images
	I0408 19:10:17.596078  180276 cache_images.go:92] duration metric: took 9.062641485s to LoadCachedImages
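The image-preload step above is minikube's cache path for the crio runtime: for each cached tarball it first stats the file on the guest (the "copy: skipping ... (exists)" lines), copies it only if missing, then runs "sudo podman load -i <tarball>" so the image lands in the container storage that CRI-O shares with podman. A minimal sketch of the same idea, shelling out to the system ssh/scp binaries; the host string is hypothetical and this is not minikube's actual cache_images implementation:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // runSSH executes a command on the guest via the system ssh binary.
    func runSSH(host string, args ...string) error {
    	out, err := exec.Command("ssh", append([]string{host}, args...)...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %s", err, out)
    	}
    	return nil
    }

    // loadCachedImage copies a cached image tarball to the guest (unless it is
    // already there) and loads it into the storage CRI-O reads from.
    func loadCachedImage(host, localTar, remoteTar string) error {
    	if err := runSSH(host, "stat", remoteTar); err != nil {
    		// Not present on the guest yet: transfer it first.
    		if err := exec.Command("scp", localTar, host+":"+remoteTar).Run(); err != nil {
    			return err
    		}
    	}
    	return runSSH(host, "sudo", "podman", "load", "-i", remoteTar)
    }

    func main() {
    	// Hypothetical example values; paths mirror the ones in the log.
    	err := loadCachedImage("docker@192.168.39.253",
    		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7",
    		"/var/lib/minikube/images/pause_3.7")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("image loaded")
    }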
	I0408 19:10:17.596092  180276 kubeadm.go:934] updating node { 192.168.39.253 8443 v1.24.4 crio true true} ...
	I0408 19:10:17.596202  180276 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-079033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-079033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:10:17.596272  180276 ssh_runner.go:195] Run: crio config
	I0408 19:10:17.647716  180276 cni.go:84] Creating CNI manager for ""
	I0408 19:10:17.647745  180276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:10:17.647756  180276 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:10:17.647775  180276 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-079033 NodeName:test-preload-079033 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:10:17.647901  180276 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-079033"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:10:17.647974  180276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0408 19:10:17.659401  180276 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:10:17.659491  180276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:10:17.672255  180276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0408 19:10:17.691255  180276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:10:17.708110  180276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
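The "scp memory" entries above mean the kubelet drop-in, the kubelet unit and the rendered kubeadm.yaml are generated in memory on the host and streamed to the guest rather than copied from files on disk. Broadly, minikube renders them from Go templates; the sketch below shows the idea with a heavily simplified template and struct (field names are placeholders, not minikube's real ones), using only values that appear in the config printed above:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // kubeadmParams is a hypothetical, trimmed-down view of the values that
    // feed the kubeadm config shown in the log.
    type kubeadmParams struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	PodSubnet         string
    	ServiceSubnet     string
    	KubernetesVersion string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress:  "192.168.39.253",
    		BindPort:          8443,
    		NodeName:          "test-preload-079033",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    		KubernetesVersion: "v1.24.4",
    	}
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Render to stdout here; the real code streams the rendered bytes to the guest.
    	if err := t.Execute(os.Stdout, p); err != nil {
    		log.Fatal(err)
    	}
    }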
	I0408 19:10:17.727492  180276 ssh_runner.go:195] Run: grep 192.168.39.253	control-plane.minikube.internal$ /etc/hosts
	I0408 19:10:17.732001  180276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
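The bash one-liner above rewrites /etc/hosts for the control-plane alias: it filters out any existing line ending in control-plane.minikube.internal, appends the fresh IP mapping, writes the result to a temp file and sudo-copies it back, since a plain "sudo ... > /etc/hosts" redirection would run the redirection as the unprivileged user. A small, purely illustrative Go version of the filter-and-append step (minikube does this inside the guest via the shell, not in Go):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // updateHosts drops any previous entry for hostname and appends a fresh
    // "IP<TAB>hostname" line, mirroring the grep -v / echo pipeline in the log.
    // Output is not guaranteed byte-identical to the shell version.
    func updateHosts(contents, ip, hostname string) string {
    	var b strings.Builder
    	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // stale entry for this hostname
    		}
    		b.WriteString(line)
    		b.WriteString("\n")
    	}
    	fmt.Fprintf(&b, "%s\t%s\n", ip, hostname)
    	return b.String()
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Print(updateHosts(string(data), "192.168.39.253", "control-plane.minikube.internal"))
    }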
	I0408 19:10:17.745902  180276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:10:17.868813  180276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:10:17.888078  180276 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033 for IP: 192.168.39.253
	I0408 19:10:17.888116  180276 certs.go:194] generating shared ca certs ...
	I0408 19:10:17.888143  180276 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:10:17.888346  180276 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:10:17.888406  180276 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:10:17.888418  180276 certs.go:256] generating profile certs ...
	I0408 19:10:17.888527  180276 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/client.key
	I0408 19:10:17.888610  180276 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/apiserver.key.e9d1bdb2
	I0408 19:10:17.888671  180276 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/proxy-client.key
	I0408 19:10:17.888806  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:10:17.888840  180276 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:10:17.888847  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:10:17.888868  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:10:17.888887  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:10:17.888906  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:10:17.888942  180276 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:10:17.889635  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:10:17.922764  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:10:17.948534  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:10:18.011696  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:10:18.041072  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 19:10:18.077494  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 19:10:18.110982  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:10:18.146166  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:10:18.172596  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:10:18.200038  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:10:18.225757  180276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:10:18.252400  180276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:10:18.271953  180276 ssh_runner.go:195] Run: openssl version
	I0408 19:10:18.278730  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:10:18.291378  180276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:10:18.296664  180276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:10:18.296735  180276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:10:18.303373  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:10:18.316403  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:10:18.328946  180276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:10:18.334261  180276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:10:18.334346  180276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:10:18.340439  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:10:18.353616  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:10:18.366001  180276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:10:18.371258  180276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:10:18.371329  180276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:10:18.377826  180276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
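The openssl/ln sequence above is how the extra CA certificates get into the guest's system trust store: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA.pem in this log), and OpenSSL looks CAs up in /etc/ssl/certs under <hash>.0, so a symlink with that name is enough to make the CA trusted. A minimal sketch of the same two steps, assuming openssl is on PATH; the paths are illustrative:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert computes the OpenSSL subject hash of a PEM certificate and links
    // it into certsDir as <hash>.0, where OpenSSL expects to find trusted CAs.
    func trustCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("CA linked into trust store")
    }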
	I0408 19:10:18.391312  180276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:10:18.396610  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:10:18.403161  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:10:18.410305  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:10:18.418309  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:10:18.425202  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:10:18.432138  180276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
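The "-checkend 86400" runs above are expiry checks rather than full validations: openssl exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which lets minikube decide whether the existing control-plane certificates can be reused. The same check done natively in Go might look like this sketch (the 24h window matches the log, the path is one of the files it checks):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }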
	I0408 19:10:18.439141  180276 kubeadm.go:392] StartCluster: {Name:test-preload-079033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-
079033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:10:18.439311  180276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:10:18.439378  180276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:10:18.485091  180276 cri.go:89] found id: ""
	I0408 19:10:18.485168  180276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:10:18.497935  180276 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 19:10:18.497960  180276 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 19:10:18.498043  180276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:10:18.510734  180276 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:10:18.511161  180276 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-079033" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:10:18.511285  180276 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-141129/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-079033" cluster setting kubeconfig missing "test-preload-079033" context setting]
	I0408 19:10:18.511553  180276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:10:18.512095  180276 kapi.go:59] client config for test-preload-079033: &rest.Config{Host:"https://192.168.39.253:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/client.crt", KeyFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/client.key", CAFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 19:10:18.512532  180276 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0408 19:10:18.512552  180276 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0408 19:10:18.512559  180276 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0408 19:10:18.512564  180276 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0408 19:10:18.512934  180276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:10:18.524738  180276 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.253
	I0408 19:10:18.524788  180276 kubeadm.go:1160] stopping kube-system containers ...
	I0408 19:10:18.524806  180276 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 19:10:18.524872  180276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:10:18.569891  180276 cri.go:89] found id: ""
	I0408 19:10:18.569978  180276 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 19:10:18.587257  180276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:10:18.597249  180276 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:10:18.597277  180276 kubeadm.go:157] found existing configuration files:
	
	I0408 19:10:18.597335  180276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:10:18.606704  180276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:10:18.606763  180276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:10:18.617141  180276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:10:18.627456  180276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:10:18.627538  180276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:10:18.638330  180276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:10:18.648679  180276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:10:18.648770  180276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:10:18.659474  180276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:10:18.669014  180276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:10:18.669077  180276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:10:18.678867  180276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:10:18.689051  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:10:18.809776  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:10:19.596212  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:10:19.875124  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:10:19.938341  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
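Because existing configuration files were found ("will attempt cluster restart" earlier in the log), minikube does not run a full `kubeadm init`; it replays the individual init phases shown above, in order: certs, kubeconfig, kubelet-start, control-plane and etcd, all against the freshly copied kubeadm.yaml. A sketch of that sequencing; the real code wraps each command in ssh_runner with the version-pinned PATH shown in the log, while this hypothetical version runs kubeadm locally:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Phases replayed for a cluster restart, in the order shown in the log.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		// Hypothetical local invocation; minikube runs the same command inside
    		// the guest over SSH with PATH pinned to /var/lib/minikube/binaries/v1.24.4.
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			log.Fatalf("phase %v failed: %v\n%s", phase, err, out)
    		}
    		fmt.Println("completed phase:", phase)
    	}
    }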
	I0408 19:10:20.011457  180276 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:10:20.011551  180276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:10:20.512565  180276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:10:21.012404  180276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:10:21.029125  180276 api_server.go:72] duration metric: took 1.017665772s to wait for apiserver process to appear ...
	I0408 19:10:21.029161  180276 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:10:21.029190  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:21.029664  180276 api_server.go:269] stopped: https://192.168.39.253:8443/healthz: Get "https://192.168.39.253:8443/healthz": dial tcp 192.168.39.253:8443: connect: connection refused
	I0408 19:10:21.529355  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:21.530186  180276 api_server.go:269] stopped: https://192.168.39.253:8443/healthz: Get "https://192.168.39.253:8443/healthz": dial tcp 192.168.39.253:8443: connect: connection refused
	I0408 19:10:22.029944  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:24.851567  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:10:24.851606  180276 api_server.go:103] status: https://192.168.39.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:10:24.851633  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:24.878192  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:10:24.878228  180276 api_server.go:103] status: https://192.168.39.253:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:10:25.029670  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:25.037203  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:10:25.037239  180276 api_server.go:103] status: https://192.168.39.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:10:25.530048  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:25.535972  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:10:25.536007  180276 api_server.go:103] status: https://192.168.39.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:10:26.029673  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:26.048532  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:10:26.048566  180276 api_server.go:103] status: https://192.168.39.253:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:10:26.530349  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:26.537466  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0408 19:10:26.544377  180276 api_server.go:141] control plane version: v1.24.4
	I0408 19:10:26.544413  180276 api_server.go:131] duration metric: took 5.51524357s to wait for apiserver health ...
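The healthz progression above is typical of an apiserver coming back up: first connection refused while the static pod starts, then 403 (the anonymous probe is rejected before the RBAC bootstrap roles exist), then 500 while the rbac/bootstrap-roles post-start hook is still failing, and finally 200 once every hook reports ok. A minimal sketch of polling such an endpoint until it answers 200, skipping TLS verification for the probe since the serving cert is cluster-internal:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // Non-200 answers (403 before RBAC bootstrap, 500 while post-start hooks run)
    // and connection errors are treated as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.253:8443/healthz", time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver healthy")
    }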
	I0408 19:10:26.544423  180276 cni.go:84] Creating CNI manager for ""
	I0408 19:10:26.544429  180276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:10:26.546688  180276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 19:10:26.548956  180276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 19:10:26.559999  180276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
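With the kvm2 driver and crio, minikube falls back to the bridge CNI by dropping a conflist into /etc/cni/net.d (the 496-byte 1-k8s.conflist above). The exact file contents are not shown in this log; as an assumption-labelled illustration only, a bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR used by this cluster could be emitted like this:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    func main() {
    	// Illustrative bridge CNI conflist; every field value here is an assumption
    	// except the pod CIDR, which matches the kubeadm config in the log.
    	conflist := map[string]any{
    		"cniVersion": "0.4.0",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, err := json.MarshalIndent(conflist, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out)) // would be written to /etc/cni/net.d/1-k8s.conflist on the guest
    }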
	I0408 19:10:26.579349  180276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:10:26.585425  180276 system_pods.go:59] 7 kube-system pods found
	I0408 19:10:26.585467  180276 system_pods.go:61] "coredns-6d4b75cb6d-t78gc" [2be47ad1-05bd-40cf-885a-e925082664b7] Running
	I0408 19:10:26.585475  180276 system_pods.go:61] "etcd-test-preload-079033" [a65ad59d-ea1b-449a-af0f-2cf49f628d9f] Running
	I0408 19:10:26.585480  180276 system_pods.go:61] "kube-apiserver-test-preload-079033" [358ef9db-d615-4d0a-99a2-72ea069b4f67] Running
	I0408 19:10:26.585491  180276 system_pods.go:61] "kube-controller-manager-test-preload-079033" [e84c887a-80ff-4fa6-b811-f358b77af68a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:10:26.585500  180276 system_pods.go:61] "kube-proxy-8958v" [c2304682-1e91-43f5-aaa8-9b71c85e3cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 19:10:26.585514  180276 system_pods.go:61] "kube-scheduler-test-preload-079033" [503ada9a-6197-4f29-b7ca-6f150dd212c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:10:26.585525  180276 system_pods.go:61] "storage-provisioner" [91503bfb-3738-4dda-baf4-fa05ef756650] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 19:10:26.585535  180276 system_pods.go:74] duration metric: took 6.158856ms to wait for pod list to return data ...
	I0408 19:10:26.585561  180276 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:10:26.588384  180276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:10:26.588417  180276 node_conditions.go:123] node cpu capacity is 2
	I0408 19:10:26.588436  180276 node_conditions.go:105] duration metric: took 2.865962ms to run NodePressure ...
	I0408 19:10:26.588461  180276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:10:26.823171  180276 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0408 19:10:26.828609  180276 kubeadm.go:739] kubelet initialised
	I0408 19:10:26.828645  180276 kubeadm.go:740] duration metric: took 5.433369ms waiting for restarted kubelet to initialise ...
	I0408 19:10:26.828659  180276 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 19:10:26.837149  180276 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:26.846747  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.846786  180276 pod_ready.go:82] duration metric: took 9.594125ms for pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:26.846800  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.846810  180276 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:26.853096  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "etcd-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.853125  180276 pod_ready.go:82] duration metric: took 6.307625ms for pod "etcd-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:26.853137  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "etcd-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.853144  180276 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:26.859550  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "kube-apiserver-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.859593  180276 pod_ready.go:82] duration metric: took 6.436338ms for pod "kube-apiserver-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:26.859608  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "kube-apiserver-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.859618  180276 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:26.983631  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.983684  180276 pod_ready.go:82] duration metric: took 124.043465ms for pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:26.983701  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:26.983712  180276 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8958v" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:27.385684  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "kube-proxy-8958v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:27.385720  180276 pod_ready.go:82] duration metric: took 401.994981ms for pod "kube-proxy-8958v" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:27.385735  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "kube-proxy-8958v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:27.385745  180276 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:27.784330  180276 pod_ready.go:98] node "test-preload-079033" hosting pod "kube-scheduler-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:27.784380  180276 pod_ready.go:82] duration metric: took 398.625443ms for pod "kube-scheduler-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	E0408 19:10:27.784396  180276 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-079033" hosting pod "kube-scheduler-test-preload-079033" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:27.784407  180276 pod_ready.go:39] duration metric: took 955.735536ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
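The pod_ready loop above short-circuits: each system pod is skipped (and a WaitExtra error logged) because the node itself still reports Ready=False immediately after the kubelet restart, so the per-pod Ready checks are deferred rather than failed. A sketch of the underlying check with client-go, fetching the pod, inspecting its PodReady condition, and also consulting the hosting node's Ready condition; the package paths are standard client-go, while the kubeconfig path and pod name are taken from this log:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true if the PodReady condition is present and True.
    func isPodReady(conds []corev1.PodCondition) bool {
    	for _, c := range conds {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20604-141129/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()

    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6d4b75cb6d-t78gc", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodeReady := false
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    			nodeReady = true
    		}
    	}
    	fmt.Printf("node ready: %v, pod ready: %v\n", nodeReady, isPodReady(pod.Status.Conditions))
    }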
	I0408 19:10:27.784433  180276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:10:27.797336  180276 ops.go:34] apiserver oom_adj: -16
	I0408 19:10:27.797366  180276 kubeadm.go:597] duration metric: took 9.299398118s to restartPrimaryControlPlane
	I0408 19:10:27.797378  180276 kubeadm.go:394] duration metric: took 9.358248645s to StartCluster
	I0408 19:10:27.797401  180276 settings.go:142] acquiring lock: {Name:mk8d530f6b8ad949177759460b330a3d74710125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:10:27.797485  180276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:10:27.798584  180276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:10:27.798867  180276 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:10:27.799135  180276 config.go:182] Loaded profile config "test-preload-079033": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0408 19:10:27.799082  180276 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 19:10:27.799212  180276 addons.go:69] Setting default-storageclass=true in profile "test-preload-079033"
	I0408 19:10:27.799255  180276 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-079033"
	I0408 19:10:27.799212  180276 addons.go:69] Setting storage-provisioner=true in profile "test-preload-079033"
	I0408 19:10:27.799331  180276 addons.go:238] Setting addon storage-provisioner=true in "test-preload-079033"
	W0408 19:10:27.799345  180276 addons.go:247] addon storage-provisioner should already be in state true
	I0408 19:10:27.799380  180276 host.go:66] Checking if "test-preload-079033" exists ...
	I0408 19:10:27.799792  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:10:27.799842  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:10:27.799792  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:10:27.799989  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:10:27.801725  180276 out.go:177] * Verifying Kubernetes components...
	I0408 19:10:27.803568  180276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:10:27.818688  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0408 19:10:27.818767  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I0408 19:10:27.819291  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:10:27.819492  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:10:27.820087  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:10:27.820104  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:10:27.820255  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:10:27.820278  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:10:27.820535  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:10:27.820693  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:10:27.820890  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetState
	I0408 19:10:27.821395  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:10:27.821451  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:10:27.824344  180276 kapi.go:59] client config for test-preload-079033: &rest.Config{Host:"https://192.168.39.253:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/client.crt", KeyFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/profiles/test-preload-079033/client.key", CAFile:"/home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 19:10:27.824828  180276 addons.go:238] Setting addon default-storageclass=true in "test-preload-079033"
	W0408 19:10:27.824861  180276 addons.go:247] addon default-storageclass should already be in state true
	I0408 19:10:27.824905  180276 host.go:66] Checking if "test-preload-079033" exists ...
	I0408 19:10:27.825365  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:10:27.825429  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:10:27.839649  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0408 19:10:27.840231  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:10:27.840735  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:10:27.840756  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:10:27.841338  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:10:27.841631  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetState
	I0408 19:10:27.844252  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:27.845264  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0408 19:10:27.845889  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:10:27.846517  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:10:27.846555  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:10:27.847125  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:10:27.847256  180276 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:10:27.847656  180276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:10:27.847711  180276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:10:27.849339  180276 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:10:27.849369  180276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:10:27.849399  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:27.854707  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:27.855663  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:27.855703  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:27.856185  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:27.856556  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:27.856826  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:27.857097  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:27.886492  180276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0408 19:10:27.886915  180276 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:10:27.887334  180276 main.go:141] libmachine: Using API Version  1
	I0408 19:10:27.887358  180276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:10:27.887770  180276 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:10:27.888018  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetState
	I0408 19:10:27.889757  180276 main.go:141] libmachine: (test-preload-079033) Calling .DriverName
	I0408 19:10:27.890013  180276 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:10:27.890050  180276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:10:27.890077  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHHostname
	I0408 19:10:27.893575  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:27.894124  180276 main.go:141] libmachine: (test-preload-079033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f4:b6", ip: ""} in network mk-test-preload-079033: {Iface:virbr1 ExpiryTime:2025-04-08 20:09:53 +0000 UTC Type:0 Mac:52:54:00:8f:f4:b6 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:test-preload-079033 Clientid:01:52:54:00:8f:f4:b6}
	I0408 19:10:27.894157  180276 main.go:141] libmachine: (test-preload-079033) DBG | domain test-preload-079033 has defined IP address 192.168.39.253 and MAC address 52:54:00:8f:f4:b6 in network mk-test-preload-079033
	I0408 19:10:27.894339  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHPort
	I0408 19:10:27.894542  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHKeyPath
	I0408 19:10:27.894670  180276 main.go:141] libmachine: (test-preload-079033) Calling .GetSSHUsername
	I0408 19:10:27.894855  180276 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/test-preload-079033/id_rsa Username:docker}
	I0408 19:10:28.001471  180276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:10:28.022112  180276 node_ready.go:35] waiting up to 6m0s for node "test-preload-079033" to be "Ready" ...
	I0408 19:10:28.084547  180276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:10:28.131867  180276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:10:29.075298  180276 main.go:141] libmachine: Making call to close driver server
	I0408 19:10:29.075327  180276 main.go:141] libmachine: (test-preload-079033) Calling .Close
	I0408 19:10:29.075381  180276 main.go:141] libmachine: Making call to close driver server
	I0408 19:10:29.075407  180276 main.go:141] libmachine: (test-preload-079033) Calling .Close
	I0408 19:10:29.075625  180276 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:10:29.075640  180276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:10:29.075650  180276 main.go:141] libmachine: Making call to close driver server
	I0408 19:10:29.075650  180276 main.go:141] libmachine: (test-preload-079033) DBG | Closing plugin on server side
	I0408 19:10:29.075657  180276 main.go:141] libmachine: (test-preload-079033) Calling .Close
	I0408 19:10:29.075715  180276 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:10:29.075722  180276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:10:29.075732  180276 main.go:141] libmachine: (test-preload-079033) DBG | Closing plugin on server side
	I0408 19:10:29.075742  180276 main.go:141] libmachine: Making call to close driver server
	I0408 19:10:29.075749  180276 main.go:141] libmachine: (test-preload-079033) Calling .Close
	I0408 19:10:29.075930  180276 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:10:29.075949  180276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:10:29.075980  180276 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:10:29.075994  180276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:10:29.076006  180276 main.go:141] libmachine: (test-preload-079033) DBG | Closing plugin on server side
	I0408 19:10:29.083216  180276 main.go:141] libmachine: Making call to close driver server
	I0408 19:10:29.083237  180276 main.go:141] libmachine: (test-preload-079033) Calling .Close
	I0408 19:10:29.083555  180276 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:10:29.083578  180276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:10:29.083588  180276 main.go:141] libmachine: (test-preload-079033) DBG | Closing plugin on server side
	I0408 19:10:29.085919  180276 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0408 19:10:29.087473  180276 addons.go:514] duration metric: took 1.288483375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0408 19:10:30.027833  180276 node_ready.go:53] node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:32.526864  180276 node_ready.go:53] node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:35.027570  180276 node_ready.go:53] node "test-preload-079033" has status "Ready":"False"
	I0408 19:10:35.526914  180276 node_ready.go:49] node "test-preload-079033" has status "Ready":"True"
	I0408 19:10:35.526968  180276 node_ready.go:38] duration metric: took 7.504814045s for node "test-preload-079033" to be "Ready" ...
	I0408 19:10:35.526979  180276 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 19:10:35.531154  180276 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:35.539967  180276 pod_ready.go:93] pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:35.540007  180276 pod_ready.go:82] duration metric: took 8.814505ms for pod "coredns-6d4b75cb6d-t78gc" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:35.540031  180276 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:37.546579  180276 pod_ready.go:103] pod "etcd-test-preload-079033" in "kube-system" namespace has status "Ready":"False"
	I0408 19:10:39.546932  180276 pod_ready.go:93] pod "etcd-test-preload-079033" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:39.546988  180276 pod_ready.go:82] duration metric: took 4.006949028s for pod "etcd-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.547001  180276 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.554117  180276 pod_ready.go:93] pod "kube-apiserver-test-preload-079033" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:39.554145  180276 pod_ready.go:82] duration metric: took 7.136901ms for pod "kube-apiserver-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.554155  180276 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.560029  180276 pod_ready.go:93] pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:39.560062  180276 pod_ready.go:82] duration metric: took 5.899143ms for pod "kube-controller-manager-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.560077  180276 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8958v" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.566285  180276 pod_ready.go:93] pod "kube-proxy-8958v" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:39.566315  180276 pod_ready.go:82] duration metric: took 6.228846ms for pod "kube-proxy-8958v" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.566328  180276 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.571497  180276 pod_ready.go:93] pod "kube-scheduler-test-preload-079033" in "kube-system" namespace has status "Ready":"True"
	I0408 19:10:39.571523  180276 pod_ready.go:82] duration metric: took 5.188179ms for pod "kube-scheduler-test-preload-079033" in "kube-system" namespace to be "Ready" ...
	I0408 19:10:39.571535  180276 pod_ready.go:39] duration metric: took 4.044544508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 19:10:39.571552  180276 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:10:39.571615  180276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:10:39.591573  180276 api_server.go:72] duration metric: took 11.79266475s to wait for apiserver process to appear ...
	I0408 19:10:39.591609  180276 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:10:39.591631  180276 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0408 19:10:39.598028  180276 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0408 19:10:39.599358  180276 api_server.go:141] control plane version: v1.24.4
	I0408 19:10:39.599386  180276 api_server.go:131] duration metric: took 7.769472ms to wait for apiserver health ...
	I0408 19:10:39.599396  180276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:10:39.745886  180276 system_pods.go:59] 7 kube-system pods found
	I0408 19:10:39.745927  180276 system_pods.go:61] "coredns-6d4b75cb6d-t78gc" [2be47ad1-05bd-40cf-885a-e925082664b7] Running
	I0408 19:10:39.745935  180276 system_pods.go:61] "etcd-test-preload-079033" [a65ad59d-ea1b-449a-af0f-2cf49f628d9f] Running
	I0408 19:10:39.745940  180276 system_pods.go:61] "kube-apiserver-test-preload-079033" [358ef9db-d615-4d0a-99a2-72ea069b4f67] Running
	I0408 19:10:39.745945  180276 system_pods.go:61] "kube-controller-manager-test-preload-079033" [e84c887a-80ff-4fa6-b811-f358b77af68a] Running
	I0408 19:10:39.745949  180276 system_pods.go:61] "kube-proxy-8958v" [c2304682-1e91-43f5-aaa8-9b71c85e3cb3] Running
	I0408 19:10:39.745954  180276 system_pods.go:61] "kube-scheduler-test-preload-079033" [503ada9a-6197-4f29-b7ca-6f150dd212c6] Running
	I0408 19:10:39.745959  180276 system_pods.go:61] "storage-provisioner" [91503bfb-3738-4dda-baf4-fa05ef756650] Running
	I0408 19:10:39.745968  180276 system_pods.go:74] duration metric: took 146.56368ms to wait for pod list to return data ...
	I0408 19:10:39.745979  180276 default_sa.go:34] waiting for default service account to be created ...
	I0408 19:10:39.944306  180276 default_sa.go:45] found service account: "default"
	I0408 19:10:39.944339  180276 default_sa.go:55] duration metric: took 198.347719ms for default service account to be created ...
	I0408 19:10:39.944353  180276 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 19:10:40.146358  180276 system_pods.go:86] 7 kube-system pods found
	I0408 19:10:40.146404  180276 system_pods.go:89] "coredns-6d4b75cb6d-t78gc" [2be47ad1-05bd-40cf-885a-e925082664b7] Running
	I0408 19:10:40.146421  180276 system_pods.go:89] "etcd-test-preload-079033" [a65ad59d-ea1b-449a-af0f-2cf49f628d9f] Running
	I0408 19:10:40.146428  180276 system_pods.go:89] "kube-apiserver-test-preload-079033" [358ef9db-d615-4d0a-99a2-72ea069b4f67] Running
	I0408 19:10:40.146433  180276 system_pods.go:89] "kube-controller-manager-test-preload-079033" [e84c887a-80ff-4fa6-b811-f358b77af68a] Running
	I0408 19:10:40.146438  180276 system_pods.go:89] "kube-proxy-8958v" [c2304682-1e91-43f5-aaa8-9b71c85e3cb3] Running
	I0408 19:10:40.146443  180276 system_pods.go:89] "kube-scheduler-test-preload-079033" [503ada9a-6197-4f29-b7ca-6f150dd212c6] Running
	I0408 19:10:40.146452  180276 system_pods.go:89] "storage-provisioner" [91503bfb-3738-4dda-baf4-fa05ef756650] Running
	I0408 19:10:40.146464  180276 system_pods.go:126] duration metric: took 202.103709ms to wait for k8s-apps to be running ...
	I0408 19:10:40.146486  180276 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 19:10:40.146547  180276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:10:40.168171  180276 system_svc.go:56] duration metric: took 21.687052ms WaitForService to wait for kubelet
	I0408 19:10:40.168214  180276 kubeadm.go:582] duration metric: took 12.369316847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:10:40.168243  180276 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:10:40.345186  180276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:10:40.345216  180276 node_conditions.go:123] node cpu capacity is 2
	I0408 19:10:40.345230  180276 node_conditions.go:105] duration metric: took 176.971292ms to run NodePressure ...
	I0408 19:10:40.345242  180276 start.go:241] waiting for startup goroutines ...
	I0408 19:10:40.345249  180276 start.go:246] waiting for cluster config update ...
	I0408 19:10:40.345266  180276 start.go:255] writing updated cluster config ...
	I0408 19:10:40.345578  180276 ssh_runner.go:195] Run: rm -f paused
	I0408 19:10:40.404947  180276 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0408 19:10:40.407644  180276 out.go:201] 
	W0408 19:10:40.409370  180276 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0408 19:10:40.410803  180276 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0408 19:10:40.412435  180276 out.go:177] * Done! kubectl is now configured to use "test-preload-079033" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.399112122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744139441399079908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6c48d11-5e64-4e98-a5df-c76880f85072 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.399738082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45d9eafd-8673-4dcd-8078-6b90ad9bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.399863152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45d9eafd-8673-4dcd-8078-6b90ad9bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.400056817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4c9db5d9fdb681315605f6f3452fc804ec29e7a7839946ed3014c482da90944,PodSandboxId:0d30b21ebbf288c2ff14786b1a94971e2a03ce64c7e10c8989c0678ae760035f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744139433169091008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-t78gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be47ad1-05bd-40cf-885a-e925082664b7,},Annotations:map[string]string{io.kubernetes.container.hash: 307ab03f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9a175b45cf1d2c4bdc006d61c1a1efde017e2fdc72ca28c28fb457c9863da2,PodSandboxId:e87d204879aeac39fa652de7c578c2c87d7811f834d61c80be548f3f8919f296,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744139426027178404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8958v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c2304682-1e91-43f5-aaa8-9b71c85e3cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9b940a57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad0c0dd4a28cb36a0fc3f4d15397ad55bac8128d9a5d133b94b76f14b9086b31,PodSandboxId:579401cf260f1ed2c73cd8db4fdbe7bf8735a981d58f98d4a1968e20b4fd4d0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744139425703888611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91
503bfb-3738-4dda-baf4-fa05ef756650,},Annotations:map[string]string{io.kubernetes.container.hash: 806b8938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:148843b7f6c2636d481dd6821df2ec29679393869c936fac2a03d299fb0294de,PodSandboxId:4838661eb910e947f78cbfc9dc9cf76ceed65073d97c01857bb8635c50ac3c41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744139420756578182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f543130dc
357ea9c4f211fe3aa2dabd4,},Annotations:map[string]string{io.kubernetes.container.hash: 13f369a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d99c7567660f8ed8524a6e7f2d82b570a4b916c47a9482c7c987a77b84946c,PodSandboxId:0b0ea8f86f459c06a99f4b323197f5f000f963e42c5c8e4f64c99bf0a45502bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744139420732736721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78ee92074cacf7653b8d
1c16a6de3467,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b96cdf46db74813bb40a2ee0f3b3e1922d14bbb640f8930d2575670695afcf,PodSandboxId:0057df14ad4a8b3577f0bdac129ee8edbe6d8049ca9ed77087296250aa903615,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744139420736331946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0327f6408a870df54033bacb212f23c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac8bda6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5204f2ea01899bcfd46e0f928f51f1ce30ac34c59cdf674783b104532ec059d3,PodSandboxId:e4eec247aa09da40e18fb01a4bd8b77a477f623960fa924d70f99c0cb11f4bc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744139420680002845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c13c237cd9eba661e5cad1484e4f2af5,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45d9eafd-8673-4dcd-8078-6b90ad9bc2a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.439661896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=754630e5-f805-4620-abf6-771e1f0d22b3 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.439805585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=754630e5-f805-4620-abf6-771e1f0d22b3 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.441331715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36be8f0b-cf67-4858-9f5e-895e195993d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.441956259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744139441441926058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36be8f0b-cf67-4858-9f5e-895e195993d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.442548110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4875b0f8-7cbe-42f5-9212-4b9f42a9bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.442628852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4875b0f8-7cbe-42f5-9212-4b9f42a9bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.442886436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4c9db5d9fdb681315605f6f3452fc804ec29e7a7839946ed3014c482da90944,PodSandboxId:0d30b21ebbf288c2ff14786b1a94971e2a03ce64c7e10c8989c0678ae760035f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744139433169091008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-t78gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be47ad1-05bd-40cf-885a-e925082664b7,},Annotations:map[string]string{io.kubernetes.container.hash: 307ab03f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9a175b45cf1d2c4bdc006d61c1a1efde017e2fdc72ca28c28fb457c9863da2,PodSandboxId:e87d204879aeac39fa652de7c578c2c87d7811f834d61c80be548f3f8919f296,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744139426027178404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8958v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c2304682-1e91-43f5-aaa8-9b71c85e3cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9b940a57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad0c0dd4a28cb36a0fc3f4d15397ad55bac8128d9a5d133b94b76f14b9086b31,PodSandboxId:579401cf260f1ed2c73cd8db4fdbe7bf8735a981d58f98d4a1968e20b4fd4d0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744139425703888611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91
503bfb-3738-4dda-baf4-fa05ef756650,},Annotations:map[string]string{io.kubernetes.container.hash: 806b8938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:148843b7f6c2636d481dd6821df2ec29679393869c936fac2a03d299fb0294de,PodSandboxId:4838661eb910e947f78cbfc9dc9cf76ceed65073d97c01857bb8635c50ac3c41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744139420756578182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f543130dc
357ea9c4f211fe3aa2dabd4,},Annotations:map[string]string{io.kubernetes.container.hash: 13f369a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d99c7567660f8ed8524a6e7f2d82b570a4b916c47a9482c7c987a77b84946c,PodSandboxId:0b0ea8f86f459c06a99f4b323197f5f000f963e42c5c8e4f64c99bf0a45502bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744139420732736721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78ee92074cacf7653b8d
1c16a6de3467,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b96cdf46db74813bb40a2ee0f3b3e1922d14bbb640f8930d2575670695afcf,PodSandboxId:0057df14ad4a8b3577f0bdac129ee8edbe6d8049ca9ed77087296250aa903615,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744139420736331946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0327f6408a870df54033bacb212f23c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac8bda6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5204f2ea01899bcfd46e0f928f51f1ce30ac34c59cdf674783b104532ec059d3,PodSandboxId:e4eec247aa09da40e18fb01a4bd8b77a477f623960fa924d70f99c0cb11f4bc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744139420680002845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c13c237cd9eba661e5cad1484e4f2af5,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4875b0f8-7cbe-42f5-9212-4b9f42a9bbd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.484308360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a7ca295-6fde-453e-8b5c-417a559301d2 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.484404877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a7ca295-6fde-453e-8b5c-417a559301d2 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.485990890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc4ae465-686d-4c52-915a-54d870c02745 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.486499092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744139441486470907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc4ae465-686d-4c52-915a-54d870c02745 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.487294741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e0ce570-7065-4170-876b-7ad986d5df06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.487355695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e0ce570-7065-4170-876b-7ad986d5df06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.487516897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4c9db5d9fdb681315605f6f3452fc804ec29e7a7839946ed3014c482da90944,PodSandboxId:0d30b21ebbf288c2ff14786b1a94971e2a03ce64c7e10c8989c0678ae760035f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744139433169091008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-t78gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be47ad1-05bd-40cf-885a-e925082664b7,},Annotations:map[string]string{io.kubernetes.container.hash: 307ab03f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9a175b45cf1d2c4bdc006d61c1a1efde017e2fdc72ca28c28fb457c9863da2,PodSandboxId:e87d204879aeac39fa652de7c578c2c87d7811f834d61c80be548f3f8919f296,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744139426027178404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8958v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c2304682-1e91-43f5-aaa8-9b71c85e3cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9b940a57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad0c0dd4a28cb36a0fc3f4d15397ad55bac8128d9a5d133b94b76f14b9086b31,PodSandboxId:579401cf260f1ed2c73cd8db4fdbe7bf8735a981d58f98d4a1968e20b4fd4d0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744139425703888611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91
503bfb-3738-4dda-baf4-fa05ef756650,},Annotations:map[string]string{io.kubernetes.container.hash: 806b8938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:148843b7f6c2636d481dd6821df2ec29679393869c936fac2a03d299fb0294de,PodSandboxId:4838661eb910e947f78cbfc9dc9cf76ceed65073d97c01857bb8635c50ac3c41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744139420756578182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f543130dc
357ea9c4f211fe3aa2dabd4,},Annotations:map[string]string{io.kubernetes.container.hash: 13f369a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d99c7567660f8ed8524a6e7f2d82b570a4b916c47a9482c7c987a77b84946c,PodSandboxId:0b0ea8f86f459c06a99f4b323197f5f000f963e42c5c8e4f64c99bf0a45502bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744139420732736721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78ee92074cacf7653b8d
1c16a6de3467,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b96cdf46db74813bb40a2ee0f3b3e1922d14bbb640f8930d2575670695afcf,PodSandboxId:0057df14ad4a8b3577f0bdac129ee8edbe6d8049ca9ed77087296250aa903615,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744139420736331946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0327f6408a870df54033bacb212f23c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac8bda6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5204f2ea01899bcfd46e0f928f51f1ce30ac34c59cdf674783b104532ec059d3,PodSandboxId:e4eec247aa09da40e18fb01a4bd8b77a477f623960fa924d70f99c0cb11f4bc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744139420680002845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c13c237cd9eba661e5cad1484e4f2af5,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e0ce570-7065-4170-876b-7ad986d5df06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.526734564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01de6208-9c1d-451c-aac4-d507da6f05c8 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.526912604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01de6208-9c1d-451c-aac4-d507da6f05c8 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.527913379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a71bcdd-44db-4ea9-97fb-fab4954d9352 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.528599691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744139441528570864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a71bcdd-44db-4ea9-97fb-fab4954d9352 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.529124171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e438cf0-3477-45d4-b1b5-a9d65592f24f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.529178513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e438cf0-3477-45d4-b1b5-a9d65592f24f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:10:41 test-preload-079033 crio[674]: time="2025-04-08 19:10:41.529591861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4c9db5d9fdb681315605f6f3452fc804ec29e7a7839946ed3014c482da90944,PodSandboxId:0d30b21ebbf288c2ff14786b1a94971e2a03ce64c7e10c8989c0678ae760035f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744139433169091008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-t78gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be47ad1-05bd-40cf-885a-e925082664b7,},Annotations:map[string]string{io.kubernetes.container.hash: 307ab03f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f9a175b45cf1d2c4bdc006d61c1a1efde017e2fdc72ca28c28fb457c9863da2,PodSandboxId:e87d204879aeac39fa652de7c578c2c87d7811f834d61c80be548f3f8919f296,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744139426027178404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8958v,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c2304682-1e91-43f5-aaa8-9b71c85e3cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9b940a57,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad0c0dd4a28cb36a0fc3f4d15397ad55bac8128d9a5d133b94b76f14b9086b31,PodSandboxId:579401cf260f1ed2c73cd8db4fdbe7bf8735a981d58f98d4a1968e20b4fd4d0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744139425703888611,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91
503bfb-3738-4dda-baf4-fa05ef756650,},Annotations:map[string]string{io.kubernetes.container.hash: 806b8938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:148843b7f6c2636d481dd6821df2ec29679393869c936fac2a03d299fb0294de,PodSandboxId:4838661eb910e947f78cbfc9dc9cf76ceed65073d97c01857bb8635c50ac3c41,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744139420756578182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f543130dc
357ea9c4f211fe3aa2dabd4,},Annotations:map[string]string{io.kubernetes.container.hash: 13f369a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d99c7567660f8ed8524a6e7f2d82b570a4b916c47a9482c7c987a77b84946c,PodSandboxId:0b0ea8f86f459c06a99f4b323197f5f000f963e42c5c8e4f64c99bf0a45502bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744139420732736721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78ee92074cacf7653b8d
1c16a6de3467,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b96cdf46db74813bb40a2ee0f3b3e1922d14bbb640f8930d2575670695afcf,PodSandboxId:0057df14ad4a8b3577f0bdac129ee8edbe6d8049ca9ed77087296250aa903615,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744139420736331946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0327f6408a870df54033bacb212f23c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac8bda6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5204f2ea01899bcfd46e0f928f51f1ce30ac34c59cdf674783b104532ec059d3,PodSandboxId:e4eec247aa09da40e18fb01a4bd8b77a477f623960fa924d70f99c0cb11f4bc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744139420680002845,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-079033,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c13c237cd9eba661e5cad1484e4f2af5,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e438cf0-3477-45d4-b1b5-a9d65592f24f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4c9db5d9fdb6       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   0d30b21ebbf28       coredns-6d4b75cb6d-t78gc
	1f9a175b45cf1       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   e87d204879aea       kube-proxy-8958v
	ad0c0dd4a28cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   579401cf260f1       storage-provisioner
	148843b7f6c26       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   4838661eb910e       kube-apiserver-test-preload-079033
	b5b96cdf46db7       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   0057df14ad4a8       etcd-test-preload-079033
	72d99c7567660       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   0b0ea8f86f459       kube-scheduler-test-preload-079033
	5204f2ea01899       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   e4eec247aa09d       kube-controller-manager-test-preload-079033
	
	
	==> coredns [c4c9db5d9fdb681315605f6f3452fc804ec29e7a7839946ed3014c482da90944] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37816 - 25587 "HINFO IN 272278594487447821.821316588511115235. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.02067502s
	
	
	==> describe nodes <==
	Name:               test-preload-079033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-079033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=00fec7ad00298ce3ccd71a2d57a7f829f082fec8
	                    minikube.k8s.io/name=test-preload-079033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_08T19_08_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 19:08:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-079033
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 19:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 19:10:35 +0000   Tue, 08 Apr 2025 19:08:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 19:10:35 +0000   Tue, 08 Apr 2025 19:08:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 19:10:35 +0000   Tue, 08 Apr 2025 19:08:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 19:10:35 +0000   Tue, 08 Apr 2025 19:10:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    test-preload-079033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ad5687bf5264cccb4ca302cffef5c83
	  System UUID:                4ad5687b-f526-4ccc-b4ca-302cffef5c83
	  Boot ID:                    6764f4ae-7e9c-49ab-b779-a11ed81e7018
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-t78gc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     116s
	  kube-system                 etcd-test-preload-079033                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m9s
	  kube-system                 kube-apiserver-test-preload-079033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-test-preload-079033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-8958v                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-test-preload-079033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m17s (x5 over 2m17s)  kubelet          Node test-preload-079033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x4 over 2m17s)  kubelet          Node test-preload-079033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x4 over 2m17s)  kubelet          Node test-preload-079033 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node test-preload-079033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node test-preload-079033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node test-preload-079033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                118s                   kubelet          Node test-preload-079033 status is now: NodeReady
	  Normal  RegisteredNode           117s                   node-controller  Node test-preload-079033 event: Registered Node test-preload-079033 in Controller
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node test-preload-079033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node test-preload-079033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node test-preload-079033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-079033 event: Registered Node test-preload-079033 in Controller
	
	
	==> dmesg <==
	[Apr 8 19:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049204] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036866] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.990421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.214766] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577734] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 8 19:10] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.060311] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068270] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.192752] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.133824] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.316624] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[ +14.241037] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.061656] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.923827] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +4.212837] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.871150] systemd-fstab-generator[1789]: Ignoring "noauto" option for root device
	[  +5.064458] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.165035] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [b5b96cdf46db74813bb40a2ee0f3b3e1922d14bbb640f8930d2575670695afcf] <==
	{"level":"info","ts":"2025-04-08T19:10:21.131Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3773e8bb706c8f02","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-08T19:10:21.131Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-08T19:10:21.133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 switched to configuration voters=(3995793186150452994)"}
	{"level":"info","ts":"2025-04-08T19:10:21.138Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4606fbf8165cac5a","local-member-id":"3773e8bb706c8f02","added-peer-id":"3773e8bb706c8f02","added-peer-peer-urls":["https://192.168.39.253:2380"]}
	{"level":"info","ts":"2025-04-08T19:10:21.138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4606fbf8165cac5a","local-member-id":"3773e8bb706c8f02","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T19:10:21.138Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-08T19:10:21.156Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-08T19:10:21.162Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3773e8bb706c8f02","initial-advertise-peer-urls":["https://192.168.39.253:2380"],"listen-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.253:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-08T19:10:21.159Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2025-04-08T19:10:21.162Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-08T19:10:21.162Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 received MsgPreVoteResp from 3773e8bb706c8f02 at term 2"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 became candidate at term 3"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 received MsgVoteResp from 3773e8bb706c8f02 at term 3"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 became leader at term 3"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3773e8bb706c8f02 elected leader 3773e8bb706c8f02 at term 3"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3773e8bb706c8f02","local-member-attributes":"{Name:test-preload-079033 ClientURLs:[https://192.168.39.253:2379]}","request-path":"/0/members/3773e8bb706c8f02/attributes","cluster-id":"4606fbf8165cac5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-08T19:10:22.355Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-08T19:10:22.357Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-08T19:10:22.358Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-08T19:10:22.359Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.253:2379"}
	{"level":"info","ts":"2025-04-08T19:10:22.362Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-08T19:10:22.362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:10:41 up 0 min,  0 users,  load average: 0.76, 0.24, 0.08
	Linux test-preload-079033 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [148843b7f6c2636d481dd6821df2ec29679393869c936fac2a03d299fb0294de] <==
	I0408 19:10:24.807598       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0408 19:10:24.868019       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0408 19:10:24.807807       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0408 19:10:24.808201       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0408 19:10:24.808215       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0408 19:10:24.820894       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0408 19:10:24.873743       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0408 19:10:24.935269       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0408 19:10:24.951129       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 19:10:24.951558       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0408 19:10:24.952951       1 cache.go:39] Caches are synced for autoregister controller
	I0408 19:10:24.968207       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 19:10:24.975289       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0408 19:10:25.018196       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 19:10:25.022030       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0408 19:10:25.502077       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0408 19:10:25.818569       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 19:10:26.374329       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0408 19:10:26.696999       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0408 19:10:26.710969       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0408 19:10:26.763054       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0408 19:10:26.791817       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 19:10:26.801590       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 19:10:38.007683       1 controller.go:611] quota admission added evaluator for: endpoints
	I0408 19:10:38.048911       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5204f2ea01899bcfd46e0f928f51f1ce30ac34c59cdf674783b104532ec059d3] <==
	I0408 19:10:38.007365       1 range_allocator.go:173] Starting range CIDR allocator
	I0408 19:10:38.007406       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0408 19:10:38.007433       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0408 19:10:38.011467       1 shared_informer.go:262] Caches are synced for taint
	I0408 19:10:38.011573       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0408 19:10:38.011670       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0408 19:10:38.011681       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-079033. Assuming now as a timestamp.
	I0408 19:10:38.011835       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0408 19:10:38.012093       1 event.go:294] "Event occurred" object="test-preload-079033" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-079033 event: Registered Node test-preload-079033 in Controller"
	I0408 19:10:38.013142       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0408 19:10:38.032609       1 shared_informer.go:262] Caches are synced for crt configmap
	I0408 19:10:38.034002       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0408 19:10:38.038943       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0408 19:10:38.056143       1 shared_informer.go:262] Caches are synced for job
	I0408 19:10:38.165948       1 shared_informer.go:262] Caches are synced for stateful set
	I0408 19:10:38.182922       1 shared_informer.go:262] Caches are synced for daemon sets
	I0408 19:10:38.185496       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0408 19:10:38.207013       1 shared_informer.go:262] Caches are synced for disruption
	I0408 19:10:38.207095       1 disruption.go:371] Sending events to api server.
	I0408 19:10:38.220280       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 19:10:38.238334       1 shared_informer.go:262] Caches are synced for resource quota
	I0408 19:10:38.241890       1 shared_informer.go:262] Caches are synced for deployment
	I0408 19:10:38.677421       1 shared_informer.go:262] Caches are synced for garbage collector
	I0408 19:10:38.677541       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0408 19:10:38.678727       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [1f9a175b45cf1d2c4bdc006d61c1a1efde017e2fdc72ca28c28fb457c9863da2] <==
	I0408 19:10:26.324515       1 node.go:163] Successfully retrieved node IP: 192.168.39.253
	I0408 19:10:26.324869       1 server_others.go:138] "Detected node IP" address="192.168.39.253"
	I0408 19:10:26.324978       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0408 19:10:26.367037       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0408 19:10:26.367116       1 server_others.go:206] "Using iptables Proxier"
	I0408 19:10:26.367839       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0408 19:10:26.368508       1 server.go:661] "Version info" version="v1.24.4"
	I0408 19:10:26.368588       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 19:10:26.370178       1 config.go:317] "Starting service config controller"
	I0408 19:10:26.370471       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0408 19:10:26.370557       1 config.go:226] "Starting endpoint slice config controller"
	I0408 19:10:26.370587       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0408 19:10:26.371934       1 config.go:444] "Starting node config controller"
	I0408 19:10:26.372009       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0408 19:10:26.471060       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0408 19:10:26.471102       1 shared_informer.go:262] Caches are synced for service config
	I0408 19:10:26.472822       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [72d99c7567660f8ed8524a6e7f2d82b570a4b916c47a9482c7c987a77b84946c] <==
	I0408 19:10:21.606103       1 serving.go:348] Generated self-signed cert in-memory
	W0408 19:10:24.866892       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 19:10:24.866991       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 19:10:24.867019       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 19:10:24.867030       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 19:10:24.930870       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0408 19:10:24.930966       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 19:10:24.938948       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0408 19:10:24.939650       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 19:10:24.940507       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 19:10:24.939684       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 19:10:25.040579       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.000395    1130 apiserver.go:52] "Watching apiserver"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.005716    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.005866    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.005912    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.008905    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-t78gc" podUID=2be47ad1-05bd-40cf-885a-e925082664b7
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.067362    1130 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079129    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks5b\" (UniqueName: \"kubernetes.io/projected/91503bfb-3738-4dda-baf4-fa05ef756650-kube-api-access-tks5b\") pod \"storage-provisioner\" (UID: \"91503bfb-3738-4dda-baf4-fa05ef756650\") " pod="kube-system/storage-provisioner"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079215    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2304682-1e91-43f5-aaa8-9b71c85e3cb3-kube-proxy\") pod \"kube-proxy-8958v\" (UID: \"c2304682-1e91-43f5-aaa8-9b71c85e3cb3\") " pod="kube-system/kube-proxy-8958v"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079240    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rvj\" (UniqueName: \"kubernetes.io/projected/c2304682-1e91-43f5-aaa8-9b71c85e3cb3-kube-api-access-n6rvj\") pod \"kube-proxy-8958v\" (UID: \"c2304682-1e91-43f5-aaa8-9b71c85e3cb3\") " pod="kube-system/kube-proxy-8958v"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079259    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2304682-1e91-43f5-aaa8-9b71c85e3cb3-xtables-lock\") pod \"kube-proxy-8958v\" (UID: \"c2304682-1e91-43f5-aaa8-9b71c85e3cb3\") " pod="kube-system/kube-proxy-8958v"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079277    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2304682-1e91-43f5-aaa8-9b71c85e3cb3-lib-modules\") pod \"kube-proxy-8958v\" (UID: \"c2304682-1e91-43f5-aaa8-9b71c85e3cb3\") " pod="kube-system/kube-proxy-8958v"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079295    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume\") pod \"coredns-6d4b75cb6d-t78gc\" (UID: \"2be47ad1-05bd-40cf-885a-e925082664b7\") " pod="kube-system/coredns-6d4b75cb6d-t78gc"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079315    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gspn\" (UniqueName: \"kubernetes.io/projected/2be47ad1-05bd-40cf-885a-e925082664b7-kube-api-access-8gspn\") pod \"coredns-6d4b75cb6d-t78gc\" (UID: \"2be47ad1-05bd-40cf-885a-e925082664b7\") " pod="kube-system/coredns-6d4b75cb6d-t78gc"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079334    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/91503bfb-3738-4dda-baf4-fa05ef756650-tmp\") pod \"storage-provisioner\" (UID: \"91503bfb-3738-4dda-baf4-fa05ef756650\") " pod="kube-system/storage-provisioner"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: I0408 19:10:25.079344    1130 reconciler.go:159] "Reconciler: start to sync state"
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.184409    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.184552    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume podName:2be47ad1-05bd-40cf-885a-e925082664b7 nodeName:}" failed. No retries permitted until 2025-04-08 19:10:25.684526759 +0000 UTC m=+5.816851784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume") pod "coredns-6d4b75cb6d-t78gc" (UID: "2be47ad1-05bd-40cf-885a-e925082664b7") : object "kube-system"/"coredns" not registered
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.688400    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 08 19:10:25 test-preload-079033 kubelet[1130]: E0408 19:10:25.688470    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume podName:2be47ad1-05bd-40cf-885a-e925082664b7 nodeName:}" failed. No retries permitted until 2025-04-08 19:10:26.688455244 +0000 UTC m=+6.820780268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume") pod "coredns-6d4b75cb6d-t78gc" (UID: "2be47ad1-05bd-40cf-885a-e925082664b7") : object "kube-system"/"coredns" not registered
	Apr 08 19:10:26 test-preload-079033 kubelet[1130]: E0408 19:10:26.696467    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 08 19:10:26 test-preload-079033 kubelet[1130]: E0408 19:10:26.696553    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume podName:2be47ad1-05bd-40cf-885a-e925082664b7 nodeName:}" failed. No retries permitted until 2025-04-08 19:10:28.696537369 +0000 UTC m=+8.828862405 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume") pod "coredns-6d4b75cb6d-t78gc" (UID: "2be47ad1-05bd-40cf-885a-e925082664b7") : object "kube-system"/"coredns" not registered
	Apr 08 19:10:27 test-preload-079033 kubelet[1130]: E0408 19:10:27.109498    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-t78gc" podUID=2be47ad1-05bd-40cf-885a-e925082664b7
	Apr 08 19:10:28 test-preload-079033 kubelet[1130]: E0408 19:10:28.712294    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 08 19:10:28 test-preload-079033 kubelet[1130]: E0408 19:10:28.712867    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume podName:2be47ad1-05bd-40cf-885a-e925082664b7 nodeName:}" failed. No retries permitted until 2025-04-08 19:10:32.712839479 +0000 UTC m=+12.845164508 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2be47ad1-05bd-40cf-885a-e925082664b7-config-volume") pod "coredns-6d4b75cb6d-t78gc" (UID: "2be47ad1-05bd-40cf-885a-e925082664b7") : object "kube-system"/"coredns" not registered
	Apr 08 19:10:29 test-preload-079033 kubelet[1130]: E0408 19:10:29.112040    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-t78gc" podUID=2be47ad1-05bd-40cf-885a-e925082664b7
	
	
	==> storage-provisioner [ad0c0dd4a28cb36a0fc3f4d15397ad55bac8128d9a5d133b94b76f14b9086b31] <==
	I0408 19:10:25.779335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-079033 -n test-preload-079033
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-079033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-079033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-079033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-079033: (1.242320148s)
--- FAIL: TestPreload (210.22s)

                                                
                                    
TestKubernetesUpgrade (441.04s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m3.50599569s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-958400] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-958400" primary control-plane node in "kubernetes-upgrade-958400" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:12:44.879699  181838 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:12:44.879839  181838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:44.879851  181838 out.go:358] Setting ErrFile to fd 2...
	I0408 19:12:44.879858  181838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:44.880158  181838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:12:44.881684  181838 out.go:352] Setting JSON to false
	I0408 19:12:44.882730  181838 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10510,"bootTime":1744129055,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:12:44.882843  181838 start.go:139] virtualization: kvm guest
	I0408 19:12:44.885409  181838 out.go:177] * [kubernetes-upgrade-958400] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:12:44.887535  181838 notify.go:220] Checking for updates...
	I0408 19:12:44.888629  181838 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:12:44.890865  181838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:12:44.892444  181838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:12:44.894193  181838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:12:44.896167  181838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:12:44.897704  181838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:12:44.900195  181838 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:12:44.955198  181838 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 19:12:44.956718  181838 start.go:297] selected driver: kvm2
	I0408 19:12:44.956756  181838 start.go:901] validating driver "kvm2" against <nil>
	I0408 19:12:44.956781  181838 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:12:44.957858  181838 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:12:44.972540  181838 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:12:44.996507  181838 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:12:44.996582  181838 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 19:12:44.996893  181838 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 19:12:44.996950  181838 cni.go:84] Creating CNI manager for ""
	I0408 19:12:44.996997  181838 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:12:44.997011  181838 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 19:12:44.997080  181838 start.go:340] cluster config:
	{Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:12:44.997279  181838 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:12:44.999434  181838 out.go:177] * Starting "kubernetes-upgrade-958400" primary control-plane node in "kubernetes-upgrade-958400" cluster
	I0408 19:12:45.000945  181838 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:12:45.001000  181838 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 19:12:45.001016  181838 cache.go:56] Caching tarball of preloaded images
	I0408 19:12:45.001105  181838 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:12:45.001120  181838 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 19:12:45.001422  181838 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/config.json ...
	I0408 19:12:45.001451  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/config.json: {Name:mk3817a8c17d9058b6c0337b79384342dcad0258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:12:45.001642  181838 start.go:360] acquireMachinesLock for kubernetes-upgrade-958400: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:13:12.163467  181838 start.go:364] duration metric: took 27.161772494s to acquireMachinesLock for "kubernetes-upgrade-958400"
	I0408 19:13:12.163558  181838 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20
.0 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:13:12.163665  181838 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 19:13:12.166142  181838 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 19:13:12.166415  181838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:13:12.166475  181838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:13:12.185360  181838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0408 19:13:12.186047  181838 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:13:12.186677  181838 main.go:141] libmachine: Using API Version  1
	I0408 19:13:12.186707  181838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:13:12.187247  181838 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:13:12.187498  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:13:12.187737  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:12.187995  181838 start.go:159] libmachine.API.Create for "kubernetes-upgrade-958400" (driver="kvm2")
	I0408 19:13:12.188043  181838 client.go:168] LocalClient.Create starting
	I0408 19:13:12.188087  181838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem
	I0408 19:13:12.188137  181838 main.go:141] libmachine: Decoding PEM data...
	I0408 19:13:12.188164  181838 main.go:141] libmachine: Parsing certificate...
	I0408 19:13:12.188264  181838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem
	I0408 19:13:12.188293  181838 main.go:141] libmachine: Decoding PEM data...
	I0408 19:13:12.188311  181838 main.go:141] libmachine: Parsing certificate...
	I0408 19:13:12.188337  181838 main.go:141] libmachine: Running pre-create checks...
	I0408 19:13:12.188352  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .PreCreateCheck
	I0408 19:13:12.188896  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetConfigRaw
	I0408 19:13:12.189432  181838 main.go:141] libmachine: Creating machine...
	I0408 19:13:12.189449  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .Create
	I0408 19:13:12.189636  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) creating KVM machine...
	I0408 19:13:12.189664  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) creating network...
	I0408 19:13:12.191745  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found existing default KVM network
	I0408 19:13:12.192673  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.192370  184236 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:61:9d} reservation:<nil>}
	I0408 19:13:12.193410  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.193292  184236 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123cf0}
	I0408 19:13:12.193465  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | created network xml: 
	I0408 19:13:12.193480  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | <network>
	I0408 19:13:12.193516  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   <name>mk-kubernetes-upgrade-958400</name>
	I0408 19:13:12.193529  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   <dns enable='no'/>
	I0408 19:13:12.193538  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   
	I0408 19:13:12.193551  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0408 19:13:12.193566  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |     <dhcp>
	I0408 19:13:12.193578  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0408 19:13:12.193588  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |     </dhcp>
	I0408 19:13:12.193594  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   </ip>
	I0408 19:13:12.193601  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG |   
	I0408 19:13:12.193611  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | </network>
	I0408 19:13:12.193621  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | 
	I0408 19:13:12.200429  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | trying to create private KVM network mk-kubernetes-upgrade-958400 192.168.50.0/24...
	I0408 19:13:12.296381  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | private KVM network mk-kubernetes-upgrade-958400 192.168.50.0/24 created
	I0408 19:13:12.296417  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting up store path in /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400 ...
	I0408 19:13:12.296434  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.296338  184236 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:13:12.296453  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) building disk image from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 19:13:12.296485  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Downloading /home/jenkins/minikube-integration/20604-141129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 19:13:12.604989  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.604735  184236 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa...
	I0408 19:13:12.741475  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.741305  184236 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/kubernetes-upgrade-958400.rawdisk...
	I0408 19:13:12.741519  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | Writing magic tar header
	I0408 19:13:12.741547  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | Writing SSH key tar header
	I0408 19:13:12.741555  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:12.741425  184236 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400 ...
	I0408 19:13:12.741568  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400 (perms=drwx------)
	I0408 19:13:12.741592  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines (perms=drwxr-xr-x)
	I0408 19:13:12.741617  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400
	I0408 19:13:12.741629  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube (perms=drwxr-xr-x)
	I0408 19:13:12.741666  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines
	I0408 19:13:12.741708  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:13:12.741724  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins/minikube-integration/20604-141129 (perms=drwxrwxr-x)
	I0408 19:13:12.741757  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 19:13:12.741769  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 19:13:12.741783  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) creating domain...
	I0408 19:13:12.741801  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129
	I0408 19:13:12.741815  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 19:13:12.741826  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home/jenkins
	I0408 19:13:12.741851  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | checking permissions on dir: /home
	I0408 19:13:12.741869  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | skipping /home - not owner
	I0408 19:13:12.743139  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) define libvirt domain using xml: 
	I0408 19:13:12.743167  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) <domain type='kvm'>
	I0408 19:13:12.743183  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <name>kubernetes-upgrade-958400</name>
	I0408 19:13:12.743191  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <memory unit='MiB'>2200</memory>
	I0408 19:13:12.743199  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <vcpu>2</vcpu>
	I0408 19:13:12.743207  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <features>
	I0408 19:13:12.743215  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <acpi/>
	I0408 19:13:12.743237  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <apic/>
	I0408 19:13:12.743251  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <pae/>
	I0408 19:13:12.743259  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     
	I0408 19:13:12.743270  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   </features>
	I0408 19:13:12.743309  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <cpu mode='host-passthrough'>
	I0408 19:13:12.743352  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   
	I0408 19:13:12.743382  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   </cpu>
	I0408 19:13:12.743393  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <os>
	I0408 19:13:12.743401  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <type>hvm</type>
	I0408 19:13:12.743410  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <boot dev='cdrom'/>
	I0408 19:13:12.743417  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <boot dev='hd'/>
	I0408 19:13:12.743427  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <bootmenu enable='no'/>
	I0408 19:13:12.743438  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   </os>
	I0408 19:13:12.743446  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   <devices>
	I0408 19:13:12.743456  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <disk type='file' device='cdrom'>
	I0408 19:13:12.743465  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/boot2docker.iso'/>
	I0408 19:13:12.743481  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <target dev='hdc' bus='scsi'/>
	I0408 19:13:12.743487  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <readonly/>
	I0408 19:13:12.743497  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </disk>
	I0408 19:13:12.743510  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <disk type='file' device='disk'>
	I0408 19:13:12.743529  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 19:13:12.743546  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/kubernetes-upgrade-958400.rawdisk'/>
	I0408 19:13:12.743557  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <target dev='hda' bus='virtio'/>
	I0408 19:13:12.743576  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </disk>
	I0408 19:13:12.743585  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <interface type='network'>
	I0408 19:13:12.743601  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <source network='mk-kubernetes-upgrade-958400'/>
	I0408 19:13:12.743616  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <model type='virtio'/>
	I0408 19:13:12.743639  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </interface>
	I0408 19:13:12.743659  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <interface type='network'>
	I0408 19:13:12.743673  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <source network='default'/>
	I0408 19:13:12.743684  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <model type='virtio'/>
	I0408 19:13:12.743714  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </interface>
	I0408 19:13:12.743733  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <serial type='pty'>
	I0408 19:13:12.743744  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <target port='0'/>
	I0408 19:13:12.743756  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </serial>
	I0408 19:13:12.743765  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <console type='pty'>
	I0408 19:13:12.743777  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <target type='serial' port='0'/>
	I0408 19:13:12.743788  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </console>
	I0408 19:13:12.743796  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     <rng model='virtio'>
	I0408 19:13:12.743811  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)       <backend model='random'>/dev/random</backend>
	I0408 19:13:12.743824  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     </rng>
	I0408 19:13:12.743832  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     
	I0408 19:13:12.743837  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)     
	I0408 19:13:12.743857  181838 main.go:141] libmachine: (kubernetes-upgrade-958400)   </devices>
	I0408 19:13:12.743869  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) </domain>
	I0408 19:13:12.743880  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) 
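The XML printed above is handed to libvirt to define and then boot the guest. A minimal sketch of those two calls with the libvirt Go bindings (libvirt.org/go/libvirt); the XML file name is a placeholder, and minikube's kvm2 driver wraps the same calls with more error handling:

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Placeholder file holding a domain definition like the one logged above.
        xml, err := os.ReadFile("kubernetes-upgrade-958400.xml")
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // "define libvirt domain using xml" -> persist the definition.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // "starting domain..." -> boot the defined domain.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain started")
    }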
	I0408 19:13:12.748603  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:01:02:66 in network default
	I0408 19:13:12.749367  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) starting domain...
	I0408 19:13:12.749391  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) ensuring networks are active...
	I0408 19:13:12.749405  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:12.750262  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Ensuring network default is active
	I0408 19:13:12.750571  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Ensuring network mk-kubernetes-upgrade-958400 is active
	I0408 19:13:12.751139  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) getting domain XML...
	I0408 19:13:12.751920  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) creating domain...
	I0408 19:13:14.220415  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) waiting for IP...
	I0408 19:13:14.221788  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.222479  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.222642  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:14.222571  184236 retry.go:31] will retry after 240.662465ms: waiting for domain to come up
	I0408 19:13:14.465528  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.466588  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.466620  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:14.466559  184236 retry.go:31] will retry after 298.175743ms: waiting for domain to come up
	I0408 19:13:14.766289  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.767000  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:14.767074  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:14.766980  184236 retry.go:31] will retry after 432.163325ms: waiting for domain to come up
	I0408 19:13:15.200725  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:15.201325  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:15.201462  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:15.201330  184236 retry.go:31] will retry after 396.991293ms: waiting for domain to come up
	I0408 19:13:15.600320  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:15.601016  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:15.601290  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:15.601037  184236 retry.go:31] will retry after 686.056857ms: waiting for domain to come up
	I0408 19:13:16.289415  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:16.290159  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:16.290235  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:16.290139  184236 retry.go:31] will retry after 935.569996ms: waiting for domain to come up
	I0408 19:13:17.228024  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:17.228568  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:17.228600  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:17.228527  184236 retry.go:31] will retry after 922.065942ms: waiting for domain to come up
	I0408 19:13:18.152302  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:18.152942  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:18.152975  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:18.152895  184236 retry.go:31] will retry after 1.007764379s: waiting for domain to come up
	I0408 19:13:19.162102  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:19.162798  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:19.162847  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:19.162692  184236 retry.go:31] will retry after 1.763426506s: waiting for domain to come up
	I0408 19:13:20.929127  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:20.929726  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:20.929763  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:20.929694  184236 retry.go:31] will retry after 2.307651894s: waiting for domain to come up
	I0408 19:13:23.239273  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:23.240056  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:23.240102  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:23.240000  184236 retry.go:31] will retry after 2.583795429s: waiting for domain to come up
	I0408 19:13:25.827026  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:25.827599  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:25.827686  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:25.827550  184236 retry.go:31] will retry after 3.077608977s: waiting for domain to come up
	I0408 19:13:28.907146  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:28.907784  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:28.907814  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:28.907724  184236 retry.go:31] will retry after 4.372978266s: waiting for domain to come up
	I0408 19:13:33.282858  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:33.283515  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find current IP address of domain kubernetes-upgrade-958400 in network mk-kubernetes-upgrade-958400
	I0408 19:13:33.283537  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | I0408 19:13:33.283463  184236 retry.go:31] will retry after 5.5805295s: waiting for domain to come up
	I0408 19:13:38.867293  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:38.867950  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) found domain IP: 192.168.50.182
	I0408 19:13:38.868002  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has current primary IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
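The "waiting for IP" loop above polls the network's DHCP leases for the domain's MAC address, sleeping a little longer after each miss. A sketch of that polling loop, assuming the libvirt Go bindings and an illustrative back-off policy (the real retry intervals come from minikube's retry helper):

    package main

    import (
        "fmt"
        "log"
        "time"

        "libvirt.org/go/libvirt"
    )

    // waitForIP polls the DHCP leases of a libvirt network until a lease for
    // the given MAC address shows up, waiting a little longer on every miss.
    func waitForIP(conn *libvirt.Connect, netName, mac string, timeout time.Duration) (string, error) {
        nw, err := conn.LookupNetworkByName(netName)
        if err != nil {
            return "", err
        }
        defer nw.Free()

        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            leases, err := nw.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if l.Mac == mac && l.IPaddr != "" {
                    return l.IPaddr, nil
                }
            }
            log.Printf("no lease for %s yet, will retry after %s", mac, backoff)
            time.Sleep(backoff)
            backoff += backoff / 2 // grow the wait; only the deadline caps it
        }
        return "", fmt.Errorf("timed out waiting for an IP on network %s", netName)
    }

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        ip, err := waitForIP(conn, "mk-kubernetes-upgrade-958400", "52:54:00:64:e2:54", 3*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("found domain IP:", ip)
    }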
	I0408 19:13:38.868012  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) reserving static IP address...
	I0408 19:13:38.868628  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-958400", mac: "52:54:00:64:e2:54", ip: "192.168.50.182"} in network mk-kubernetes-upgrade-958400
	I0408 19:13:38.982798  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | Getting to WaitForSSH function...
	I0408 19:13:38.982839  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) reserved static IP address 192.168.50.182 for domain kubernetes-upgrade-958400
	I0408 19:13:38.982888  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) waiting for SSH...
	I0408 19:13:38.986951  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:38.987489  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:38.987522  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:38.987771  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | Using SSH client type: external
	I0408 19:13:38.987824  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa (-rw-------)
	I0408 19:13:38.987864  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:13:38.987888  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | About to run SSH command:
	I0408 19:13:38.987903  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | exit 0
	I0408 19:13:39.122910  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | SSH cmd err, output: <nil>: 
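The SSH wait simply runs `exit 0` on the guest with the external ssh client and the options listed above; a zero exit status means sshd is reachable and the key is accepted. A sketch of that probe, with the retry count and sleep chosen for illustration:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // sshReachable runs `exit 0` on the guest through the system ssh client,
    // with roughly the options listed in the log above. A nil error (exit
    // status 0) means sshd is up and the key was accepted.
    func sshReachable(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        ip := "192.168.50.182" // IP found via the DHCP lease above
        key := "/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa"
        for i := 0; i < 30; i++ { // retry budget chosen for illustration
            if sshReachable(ip, key) {
                log.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }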
	I0408 19:13:39.123253  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) KVM machine creation complete
	I0408 19:13:39.123670  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetConfigRaw
	I0408 19:13:39.124375  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:39.124624  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:39.124852  181838 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 19:13:39.124871  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetState
	I0408 19:13:39.126585  181838 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 19:13:39.126611  181838 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 19:13:39.126618  181838 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 19:13:39.126627  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.130325  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.130745  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.130785  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.131123  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:39.131381  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.131614  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.131787  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:39.132002  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:39.132339  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:39.132357  181838 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 19:13:39.237907  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:13:39.237958  181838 main.go:141] libmachine: Detecting the provisioner...
	I0408 19:13:39.237970  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.241809  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.242335  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.242378  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.242962  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:39.243291  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.243533  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.243684  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:39.243863  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:39.244085  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:39.244098  181838 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 19:13:39.355297  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 19:13:39.355399  181838 main.go:141] libmachine: found compatible host: buildroot
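Provisioner detection boils down to parsing the `cat /etc/os-release` output and matching the ID/NAME fields against known distributions (Buildroot here). A small sketch of that parsing, using the exact output shown above as input:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns /etc/os-release style output into a key/value map
    // so the caller can match ID or NAME against known provisioners.
    func parseOSRelease(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        return fields
    }

    func main() {
        // The exact output captured over SSH above.
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        if parseOSRelease(out)["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        } else {
            fmt.Println("no compatible provisioner found")
        }
    }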
	I0408 19:13:39.355414  181838 main.go:141] libmachine: Provisioning with buildroot...
	I0408 19:13:39.355430  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:13:39.355770  181838 buildroot.go:166] provisioning hostname "kubernetes-upgrade-958400"
	I0408 19:13:39.355806  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:13:39.356036  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.359249  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.359803  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.359843  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.360019  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:39.360263  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.360596  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.360907  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:39.361206  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:39.361439  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:39.361453  181838 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-958400 && echo "kubernetes-upgrade-958400" | sudo tee /etc/hostname
	I0408 19:13:39.487597  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-958400
	
	I0408 19:13:39.487650  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.492868  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.494048  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.494088  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.494709  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:39.495223  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.495529  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.495794  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:39.496076  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:39.496419  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:39.496447  181838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-958400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-958400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-958400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:13:39.621796  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:13:39.621867  181838 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:13:39.621899  181838 buildroot.go:174] setting up certificates
	I0408 19:13:39.621920  181838 provision.go:84] configureAuth start
	I0408 19:13:39.621937  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:13:39.622298  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:13:39.626538  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.627338  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.627374  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.627680  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.631439  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.632498  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.632533  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.632872  181838 provision.go:143] copyHostCerts
	I0408 19:13:39.633076  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:13:39.633126  181838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:13:39.633209  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:13:39.633362  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:13:39.633375  181838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:13:39.633409  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:13:39.633495  181838 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:13:39.633506  181838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:13:39.633536  181838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:13:39.633616  181838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-958400 san=[127.0.0.1 192.168.50.182 kubernetes-upgrade-958400 localhost minikube]
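The server certificate is issued by the local minikube CA with the SAN list shown above (loopback, the node IP, the hostname, localhost, minikube). A generic crypto/x509 sketch of that issuance; the file names, the 3-year lifetime, and the PKCS#1 key format are assumptions, not necessarily what minikube's bootstrapper does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the existing CA pair (paths and PKCS#1 format are assumptions).
        caCertPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caCertPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            log.Fatal("could not decode CA PEM material")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // New server key plus a template carrying the SANs from the log.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-958400"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-958400", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.182")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
        log.Println("wrote server.pem and server-key.pem")
    }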
	I0408 19:13:39.823265  181838 provision.go:177] copyRemoteCerts
	I0408 19:13:39.823356  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:13:39.823393  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:39.828286  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.828785  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:39.828817  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:39.829365  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:39.829745  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:39.829993  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:39.830262  181838 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:13:39.920913  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:13:39.953470  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0408 19:13:39.982509  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:13:40.013757  181838 provision.go:87] duration metric: took 391.816414ms to configureAuth
	I0408 19:13:40.013793  181838 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:13:40.013998  181838 config.go:182] Loaded profile config "kubernetes-upgrade-958400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 19:13:40.014101  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:40.018388  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.019358  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.019409  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.019752  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:40.019989  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.020220  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.020406  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:40.020605  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:40.020819  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:40.020835  181838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:13:40.260523  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:13:40.260553  181838 main.go:141] libmachine: Checking connection to Docker...
	I0408 19:13:40.260562  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetURL
	I0408 19:13:40.262878  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | using libvirt version 6000000
	I0408 19:13:40.267023  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.267749  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.267786  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.268081  181838 main.go:141] libmachine: Docker is up and running!
	I0408 19:13:40.268097  181838 main.go:141] libmachine: Reticulating splines...
	I0408 19:13:40.268105  181838 client.go:171] duration metric: took 28.080050427s to LocalClient.Create
	I0408 19:13:40.268135  181838 start.go:167] duration metric: took 28.080143386s to libmachine.API.Create "kubernetes-upgrade-958400"
	I0408 19:13:40.268179  181838 start.go:293] postStartSetup for "kubernetes-upgrade-958400" (driver="kvm2")
	I0408 19:13:40.268193  181838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:13:40.268209  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:40.268536  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:13:40.268573  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:40.272116  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.272573  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.272607  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.272882  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:40.273295  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.273683  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:40.273963  181838 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:13:40.358384  181838 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:13:40.363927  181838 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:13:40.363980  181838 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:13:40.364073  181838 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:13:40.364191  181838 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:13:40.364316  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:13:40.374790  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:13:40.405488  181838 start.go:296] duration metric: took 137.284466ms for postStartSetup
	I0408 19:13:40.405573  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetConfigRaw
	I0408 19:13:40.406458  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:13:40.410602  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.411160  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.411199  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.411570  181838 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/config.json ...
	I0408 19:13:40.411843  181838 start.go:128] duration metric: took 28.248162847s to createHost
	I0408 19:13:40.411876  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:40.415479  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.416060  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.416090  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.416346  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:40.416652  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.416846  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.417038  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:40.417235  181838 main.go:141] libmachine: Using SSH client type: native
	I0408 19:13:40.417540  181838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:13:40.417559  181838 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:13:40.523391  181838 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744139620.510368459
	
	I0408 19:13:40.523427  181838 fix.go:216] guest clock: 1744139620.510368459
	I0408 19:13:40.523438  181838 fix.go:229] Guest: 2025-04-08 19:13:40.510368459 +0000 UTC Remote: 2025-04-08 19:13:40.411861686 +0000 UTC m=+55.594817329 (delta=98.506773ms)
	I0408 19:13:40.523467  181838 fix.go:200] guest clock delta is within tolerance: 98.506773ms
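The guest-clock check parses the `date +%s.%N` output, subtracts it from the host's wall clock, and accepts the host if the absolute delta stays under a tolerance (98.5 ms here). A sketch of that comparison; the 1 s tolerance is an assumed value for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // withinTolerance parses the guest's `date +%s.%N` output, compares it to
    // the host clock, and reports whether the absolute skew is acceptable.
    func withinTolerance(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        delta := host.Sub(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guestOut := "1744139620.510368459" // guest clock value reported above
        delta, ok := withinTolerance(guestOut, time.Now(), time.Second) // 1s tolerance assumed
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }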
	I0408 19:13:40.523476  181838 start.go:83] releasing machines lock for "kubernetes-upgrade-958400", held for 28.359959097s
	I0408 19:13:40.523522  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:40.523971  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:13:40.528423  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.529188  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.529217  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.529600  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:40.530465  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:40.530821  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:13:40.530932  181838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:13:40.530992  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:40.531486  181838 ssh_runner.go:195] Run: cat /version.json
	I0408 19:13:40.531518  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:13:40.535382  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.535723  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.535809  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.535841  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.536230  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:40.536343  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:40.536367  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:40.536476  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.536721  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:40.536862  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:13:40.537027  181838 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:13:40.537345  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:13:40.537624  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:13:40.538027  181838 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:13:40.644632  181838 ssh_runner.go:195] Run: systemctl --version
	I0408 19:13:40.651766  181838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:13:40.829217  181838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:13:40.835676  181838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:13:40.835771  181838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:13:40.854291  181838 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:13:40.854325  181838 start.go:495] detecting cgroup driver to use...
	I0408 19:13:40.854396  181838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:13:40.878311  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:13:40.899514  181838 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:13:40.899583  181838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:13:40.920357  181838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:13:40.939422  181838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:13:41.081139  181838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:13:41.259909  181838 docker.go:233] disabling docker service ...
	I0408 19:13:41.259994  181838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:13:41.275864  181838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:13:41.292040  181838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:13:41.441391  181838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:13:41.588486  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:13:41.614446  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:13:41.635966  181838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 19:13:41.636044  181838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:13:41.647477  181838 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:13:41.647547  181838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:13:41.658935  181838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:13:41.671304  181838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:13:41.681967  181838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:13:41.694008  181838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:13:41.704022  181838 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:13:41.704091  181838 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:13:41.717363  181838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:13:41.727901  181838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:13:41.840186  181838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:13:41.954895  181838 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:13:41.955003  181838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:13:41.960501  181838 start.go:563] Will wait 60s for crictl version
	I0408 19:13:41.960577  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:41.965336  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:13:42.010527  181838 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:13:42.010628  181838 ssh_runner.go:195] Run: crio --version
	I0408 19:13:42.042772  181838 ssh_runner.go:195] Run: crio --version
	I0408 19:13:42.080577  181838 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 19:13:42.082636  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:13:42.086382  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:42.086784  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:13:28 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:13:42.086820  181838 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:13:42.087301  181838 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 19:13:42.092223  181838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:13:42.108368  181838 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:13:42.108538  181838 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:13:42.108605  181838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:13:42.142123  181838 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:13:42.142198  181838 ssh_runner.go:195] Run: which lz4
	I0408 19:13:42.146928  181838 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:13:42.152024  181838 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:13:42.152074  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 19:13:44.128864  181838 crio.go:462] duration metric: took 1.981979629s to copy over tarball
	I0408 19:13:44.128963  181838 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:13:47.057182  181838 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.92818146s)
	I0408 19:13:47.057224  181838 crio.go:469] duration metric: took 2.928321193s to extract the tarball
	I0408 19:13:47.057235  181838 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:13:47.110011  181838 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:13:47.157629  181838 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:13:47.157659  181838 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 19:13:47.157729  181838 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:13:47.157736  181838 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.157756  181838 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.157788  181838 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.157815  181838 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.157801  181838 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.157802  181838 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0408 19:13:47.157849  181838 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.159386  181838 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.159394  181838 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.159393  181838 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:13:47.159443  181838 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.159469  181838 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.159509  181838 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.159619  181838 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.159636  181838 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 19:13:47.318652  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.322742  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.325459  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.325796  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.335659  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.339234  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 19:13:47.359385  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.423158  181838 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 19:13:47.423220  181838 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.423279  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.487660  181838 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 19:13:47.487717  181838 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.487770  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.513912  181838 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 19:13:47.514022  181838 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 19:13:47.514051  181838 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.514060  181838 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.513952  181838 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 19:13:47.514151  181838 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.514201  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.514111  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.514111  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.521503  181838 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 19:13:47.521566  181838 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 19:13:47.521629  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.535602  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.535716  181838 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 19:13:47.535743  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.535772  181838 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.535840  181838 ssh_runner.go:195] Run: which crictl
	I0408 19:13:47.535857  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.535779  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.535905  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.535966  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:13:47.684284  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.684376  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:13:47.684424  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.684445  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.684525  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.684610  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.684650  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.861401  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:13:47.861453  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:13:47.861481  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:13:47.861513  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:47.861514  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:13:47.861588  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:13:47.861617  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:13:47.997770  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 19:13:48.055807  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 19:13:48.055866  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 19:13:48.055910  181838 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:13:48.055924  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 19:13:48.056083  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 19:13:48.056127  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 19:13:48.091564  181838 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 19:13:48.623036  181838 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:13:48.774670  181838 cache_images.go:92] duration metric: took 1.616985941s to LoadCachedImages
	W0408 19:13:48.774788  181838 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0408 19:13:48.774807  181838 kubeadm.go:934] updating node { 192.168.50.182 8443 v1.20.0 crio true true} ...
	I0408 19:13:48.774927  181838 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-958400 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:13:48.775001  181838 ssh_runner.go:195] Run: crio config
	I0408 19:13:48.831300  181838 cni.go:84] Creating CNI manager for ""
	I0408 19:13:48.831342  181838 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:13:48.831352  181838 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:13:48.831371  181838 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.182 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-958400 NodeName:kubernetes-upgrade-958400 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 19:13:48.831501  181838 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-958400"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:13:48.831563  181838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 19:13:48.842637  181838 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:13:48.842728  181838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:13:48.853196  181838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0408 19:13:48.873939  181838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:13:48.893315  181838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0408 19:13:48.916920  181838 ssh_runner.go:195] Run: grep 192.168.50.182	control-plane.minikube.internal$ /etc/hosts
	I0408 19:13:48.922920  181838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:13:48.941799  181838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:13:49.070702  181838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:13:49.090650  181838 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400 for IP: 192.168.50.182
	I0408 19:13:49.090682  181838 certs.go:194] generating shared ca certs ...
	I0408 19:13:49.090700  181838 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.090891  181838 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:13:49.090937  181838 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:13:49.090952  181838 certs.go:256] generating profile certs ...
	I0408 19:13:49.091025  181838 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.key
	I0408 19:13:49.091044  181838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.crt with IP's: []
	I0408 19:13:49.362747  181838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.crt ...
	I0408 19:13:49.362799  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.crt: {Name:mk6aa46c1487a29b856dd7dfb55bada2df735381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.363044  181838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.key ...
	I0408 19:13:49.363064  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.key: {Name:mk186be9d71426e9a74cc0248baa204c80874931 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.363185  181838 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key.d506f96d
	I0408 19:13:49.363209  181838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt.d506f96d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.182]
	I0408 19:13:49.438237  181838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt.d506f96d ...
	I0408 19:13:49.438273  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt.d506f96d: {Name:mk89e2d9b69c848b36e3eef97422b0a5340fe800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.438437  181838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key.d506f96d ...
	I0408 19:13:49.438452  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key.d506f96d: {Name:mk53632651f73aedd9c7258a39174904c13d858e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.438528  181838 certs.go:381] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt.d506f96d -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt
	I0408 19:13:49.438606  181838 certs.go:385] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key.d506f96d -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key
	I0408 19:13:49.438660  181838 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key
	I0408 19:13:49.438681  181838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.crt with IP's: []
	I0408 19:13:49.880723  181838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.crt ...
	I0408 19:13:49.880757  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.crt: {Name:mk4caef3777114a9dcc73b6d374a6333f9f4c489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.880951  181838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key ...
	I0408 19:13:49.880977  181838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key: {Name:mkea38f26fbcfc7428a294e76a8dff5eac3cc68f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:13:49.881230  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:13:49.881276  181838 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:13:49.881287  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:13:49.881309  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:13:49.881333  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:13:49.881357  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:13:49.881393  181838 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:13:49.881998  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:13:49.911274  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:13:49.943338  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:13:49.989221  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:13:50.035525  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 19:13:50.072118  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:13:50.099111  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:13:50.135133  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 19:13:50.166339  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:13:50.195480  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:13:50.223576  181838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:13:50.253566  181838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:13:50.273631  181838 ssh_runner.go:195] Run: openssl version
	I0408 19:13:50.280305  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:13:50.294544  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:13:50.300162  181838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:13:50.300239  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:13:50.307233  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:13:50.321465  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:13:50.335910  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:13:50.340821  181838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:13:50.340900  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:13:50.346659  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:13:50.359114  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:13:50.372291  181838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:13:50.377338  181838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:13:50.377417  181838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:13:50.383991  181838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:13:50.397260  181838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:13:50.402308  181838 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 19:13:50.402382  181838 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kuberne
tes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:13:50.402469  181838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:13:50.402564  181838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:13:50.446730  181838 cri.go:89] found id: ""
	I0408 19:13:50.446836  181838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:13:50.459129  181838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:13:50.470390  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:13:50.481061  181838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:13:50.481088  181838 kubeadm.go:157] found existing configuration files:
	
	I0408 19:13:50.481148  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:13:50.491913  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:13:50.491995  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:13:50.503871  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:13:50.516296  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:13:50.516383  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:13:50.529647  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:13:50.540927  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:13:50.541003  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:13:50.552577  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:13:50.564815  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:13:50.564882  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:13:50.577445  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:13:50.893661  181838 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:15:48.847900  181838 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:15:48.848066  181838 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:15:48.850064  181838 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:15:48.850155  181838 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:15:48.850277  181838 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:15:48.850412  181838 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:15:48.850526  181838 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:15:48.850621  181838 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:15:49.027093  181838 out.go:235]   - Generating certificates and keys ...
	I0408 19:15:49.027241  181838 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:15:49.027368  181838 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:15:49.027484  181838 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 19:15:49.027562  181838 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 19:15:49.027678  181838 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 19:15:49.027749  181838 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 19:15:49.027857  181838 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 19:15:49.028060  181838 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0408 19:15:49.028146  181838 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 19:15:49.028312  181838 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0408 19:15:49.028395  181838 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 19:15:49.028474  181838 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 19:15:49.028534  181838 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 19:15:49.028634  181838 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:15:49.028713  181838 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:15:49.028800  181838 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:15:49.028928  181838 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:15:49.029013  181838 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:15:49.029174  181838 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:15:49.029298  181838 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:15:49.029373  181838 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:15:49.029466  181838 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:15:49.212964  181838 out.go:235]   - Booting up control plane ...
	I0408 19:15:49.213131  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:15:49.213228  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:15:49.213320  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:15:49.213450  181838 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:15:49.213683  181838 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:15:49.213772  181838 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:15:49.213915  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:15:49.214232  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:15:49.214342  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:15:49.214620  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:15:49.214727  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:15:49.214954  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:15:49.215035  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:15:49.215266  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:15:49.215397  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:15:49.215699  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:15:49.215714  181838 kubeadm.go:310] 
	I0408 19:15:49.215764  181838 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:15:49.215815  181838 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:15:49.215824  181838 kubeadm.go:310] 
	I0408 19:15:49.215870  181838 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:15:49.215914  181838 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:15:49.216041  181838 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:15:49.216055  181838 kubeadm.go:310] 
	I0408 19:15:49.216180  181838 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:15:49.216223  181838 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:15:49.216263  181838 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:15:49.216269  181838 kubeadm.go:310] 
	I0408 19:15:49.216411  181838 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:15:49.216514  181838 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:15:49.216521  181838 kubeadm.go:310] 
	I0408 19:15:49.216643  181838 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:15:49.216763  181838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:15:49.216866  181838 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:15:49.216964  181838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0408 19:15:49.217265  181838 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-958400 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 19:15:49.217325  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:15:49.217795  181838 kubeadm.go:310] 
	I0408 19:15:51.232778  181838 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.015410949s)
	I0408 19:15:51.232881  181838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:15:51.247701  181838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:15:51.258932  181838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:15:51.258956  181838 kubeadm.go:157] found existing configuration files:
	
	I0408 19:15:51.259016  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:15:51.272350  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:15:51.272427  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:15:51.284516  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:15:51.294551  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:15:51.294619  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:15:51.305292  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:15:51.315318  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:15:51.315389  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:15:51.326443  181838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:15:51.339291  181838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:15:51.339380  181838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:15:51.352753  181838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:15:51.440811  181838 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:15:51.440901  181838 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:15:51.599457  181838 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:15:51.599600  181838 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:15:51.599750  181838 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:15:51.811945  181838 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:15:51.958715  181838 out.go:235]   - Generating certificates and keys ...
	I0408 19:15:51.958884  181838 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:15:51.959008  181838 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:15:51.959162  181838 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:15:51.959278  181838 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:15:51.959402  181838 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:15:51.959492  181838 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:15:51.959607  181838 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:15:51.959728  181838 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:15:51.959848  181838 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:15:51.959954  181838 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:15:51.960007  181838 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:15:51.960087  181838 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:15:51.960157  181838 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:15:52.066815  181838 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:15:52.154338  181838 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:15:52.468090  181838 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:15:52.493011  181838 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:15:52.493254  181838 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:15:52.493373  181838 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:15:52.625171  181838 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:15:52.627556  181838 out.go:235]   - Booting up control plane ...
	I0408 19:15:52.627697  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:15:52.634877  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:15:52.636099  181838 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:15:52.637290  181838 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:15:52.649726  181838 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:16:32.651277  181838 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:16:32.651408  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:16:32.651697  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:16:37.651782  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:16:37.651991  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:16:47.652377  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:16:47.652695  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:17:07.653425  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:17:07.653698  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:17:47.653169  181838 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:17:47.653482  181838 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:17:47.653500  181838 kubeadm.go:310] 
	I0408 19:17:47.653557  181838 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:17:47.653621  181838 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:17:47.653631  181838 kubeadm.go:310] 
	I0408 19:17:47.653680  181838 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:17:47.653746  181838 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:17:47.653935  181838 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:17:47.653953  181838 kubeadm.go:310] 
	I0408 19:17:47.654093  181838 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:17:47.654140  181838 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:17:47.654192  181838 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:17:47.654203  181838 kubeadm.go:310] 
	I0408 19:17:47.654316  181838 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:17:47.654419  181838 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:17:47.654429  181838 kubeadm.go:310] 
	I0408 19:17:47.654583  181838 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:17:47.654700  181838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:17:47.654801  181838 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:17:47.654887  181838 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:17:47.654899  181838 kubeadm.go:310] 
	I0408 19:17:47.655285  181838 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:17:47.655423  181838 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:17:47.655613  181838 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:17:47.655631  181838 kubeadm.go:394] duration metric: took 3m57.253254661s to StartCluster
	I0408 19:17:47.655700  181838 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:17:47.655761  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:17:47.700039  181838 cri.go:89] found id: ""
	I0408 19:17:47.700068  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.700079  181838 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:17:47.700088  181838 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:17:47.700157  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:17:47.744913  181838 cri.go:89] found id: ""
	I0408 19:17:47.744943  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.744954  181838 logs.go:284] No container was found matching "etcd"
	I0408 19:17:47.744962  181838 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:17:47.745024  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:17:47.784666  181838 cri.go:89] found id: ""
	I0408 19:17:47.784693  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.784701  181838 logs.go:284] No container was found matching "coredns"
	I0408 19:17:47.784710  181838 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:17:47.784765  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:17:47.822653  181838 cri.go:89] found id: ""
	I0408 19:17:47.822680  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.822688  181838 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:17:47.822694  181838 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:17:47.822756  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:17:47.857167  181838 cri.go:89] found id: ""
	I0408 19:17:47.857190  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.857198  181838 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:17:47.857204  181838 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:17:47.857259  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:17:47.892211  181838 cri.go:89] found id: ""
	I0408 19:17:47.892246  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.892258  181838 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:17:47.892266  181838 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:17:47.892337  181838 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:17:47.924779  181838 cri.go:89] found id: ""
	I0408 19:17:47.924815  181838 logs.go:282] 0 containers: []
	W0408 19:17:47.924827  181838 logs.go:284] No container was found matching "kindnet"
	I0408 19:17:47.924841  181838 logs.go:123] Gathering logs for dmesg ...
	I0408 19:17:47.924859  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:17:47.945108  181838 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:17:47.945138  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:17:48.083887  181838 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:17:48.083917  181838 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:17:48.083932  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:17:48.194717  181838 logs.go:123] Gathering logs for container status ...
	I0408 19:17:48.194756  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:17:48.247462  181838 logs.go:123] Gathering logs for kubelet ...
	I0408 19:17:48.247498  181838 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 19:17:48.300224  181838 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:17:48.300321  181838 out.go:270] * 
	W0408 19:17:48.300380  181838 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:17:48.300395  181838 out.go:270] * 
	W0408 19:17:48.301244  181838 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:17:48.304370  181838 out.go:201] 
	W0408 19:17:48.305812  181838 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:17:48.305904  181838 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:17:48.305931  181838 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:17:48.307749  181838 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
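A minimal reproduction sketch for this failure, assuming the workaround minikube itself suggests above (the profile name, memory, driver and runtime flags are copied from the failing command; the kubelet.cgroup-driver override comes from the log's own suggestion and is not verified here):

	# check the kubelet on the node, per the kubeadm advice in the log above
	out/minikube-linux-amd64 -p kubernetes-upgrade-958400 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-958400 ssh -- sudo journalctl -xeu kubelet

	# retry the v1.20.0 start with the suggested cgroup-driver override
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-958400
	out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 \
		--kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
		--extra-config=kubelet.cgroup-driver=systemd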
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-958400
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-958400: (6.352019558s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-958400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-958400 status --format={{.Host}}: exit status 7 (78.528141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.288244204s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-958400 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.759898ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-958400] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-958400
	    minikube start -p kubernetes-upgrade-958400 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9584002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-958400 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-958400 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.907264069s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-08 19:20:02.154839733 +0000 UTC m=+4037.134023936
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-958400 -n kubernetes-upgrade-958400
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-958400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-958400 logs -n 25: (1.789509271s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-446442                       | pause-446442              | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:16 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-446442                       | pause-446442              | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:16 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-446442                       | pause-446442              | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:16 UTC |
	| start   | -p cert-expiration-705566             | cert-expiration-705566    | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-378868             | running-upgrade-378868    | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:16 UTC |
	| start   | -p cert-options-530977                | cert-options-530977       | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:16 UTC | 08 Apr 25 19:17 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:17 UTC |
	| start   | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:18 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-530977 ssh               | cert-options-530977       | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:17 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-530977 -- sudo        | cert-options-530977       | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:17 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-530977                | cert-options-530977       | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:17 UTC |
	| start   | -p force-systemd-env-466042           | force-systemd-env-466042  | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-958400          | kubernetes-upgrade-958400 | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:17 UTC |
	| start   | -p kubernetes-upgrade-958400          | kubernetes-upgrade-958400 | jenkins | v1.35.0 | 08 Apr 25 19:17 UTC | 08 Apr 25 19:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-006114 sudo           | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:18 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:18 UTC | 08 Apr 25 19:18 UTC |
	| start   | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:18 UTC | 08 Apr 25 19:19 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-466042           | force-systemd-env-466042  | jenkins | v1.35.0 | 08 Apr 25 19:18 UTC | 08 Apr 25 19:18 UTC |
	| start   | -p auto-880875 --memory=3072          | auto-880875               | jenkins | v1.35.0 | 08 Apr 25 19:18 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-958400          | kubernetes-upgrade-958400 | jenkins | v1.35.0 | 08 Apr 25 19:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-958400          | kubernetes-upgrade-958400 | jenkins | v1.35.0 | 08 Apr 25 19:19 UTC | 08 Apr 25 19:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-006114 sudo           | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:19 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-006114                | NoKubernetes-006114       | jenkins | v1.35.0 | 08 Apr 25 19:19 UTC | 08 Apr 25 19:19 UTC |
	| start   | -p kindnet-880875                     | kindnet-880875            | jenkins | v1.35.0 | 08 Apr 25 19:19 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 19:19:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:19:15.479573  190277 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:19:15.479701  190277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:19:15.479711  190277 out.go:358] Setting ErrFile to fd 2...
	I0408 19:19:15.479715  190277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:19:15.479955  190277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:19:15.480711  190277 out.go:352] Setting JSON to false
	I0408 19:19:15.481959  190277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10901,"bootTime":1744129055,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:19:15.482043  190277 start.go:139] virtualization: kvm guest
	I0408 19:19:15.484332  190277 out.go:177] * [kindnet-880875] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:19:15.485992  190277 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:19:15.486028  190277 notify.go:220] Checking for updates...
	I0408 19:19:15.488741  190277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:19:15.489931  190277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:19:15.491336  190277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:19:15.492786  190277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:19:15.494598  190277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:19:15.497187  190277 config.go:182] Loaded profile config "auto-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:19:15.497338  190277 config.go:182] Loaded profile config "cert-expiration-705566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:19:15.497475  190277 config.go:182] Loaded profile config "kubernetes-upgrade-958400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:19:15.497611  190277 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:19:15.539918  190277 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 19:19:15.541329  190277 start.go:297] selected driver: kvm2
	I0408 19:19:15.541358  190277 start.go:901] validating driver "kvm2" against <nil>
	I0408 19:19:15.541378  190277 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:19:15.542772  190277 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:19:15.542897  190277 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:19:15.566474  190277 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:19:15.566544  190277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 19:19:15.566833  190277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:19:15.566874  190277 cni.go:84] Creating CNI manager for "kindnet"
	I0408 19:19:15.566898  190277 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 19:19:15.566996  190277 start.go:340] cluster config:
	{Name:kindnet-880875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-880875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:19:15.567137  190277 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:19:15.569183  190277 out.go:177] * Starting "kindnet-880875" primary control-plane node in "kindnet-880875" cluster
	I0408 19:19:12.424849  189782 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 19:19:12.425633  189782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:19:12.425711  189782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:19:12.443911  189782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
	I0408 19:19:12.444447  189782 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:19:12.445090  189782 main.go:141] libmachine: Using API Version  1
	I0408 19:19:12.445119  189782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:19:12.445532  189782 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:19:12.445767  189782 main.go:141] libmachine: (auto-880875) Calling .GetMachineName
	I0408 19:19:12.445929  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:12.446154  189782 start.go:159] libmachine.API.Create for "auto-880875" (driver="kvm2")
	I0408 19:19:12.446205  189782 client.go:168] LocalClient.Create starting
	I0408 19:19:12.446246  189782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem
	I0408 19:19:12.446293  189782 main.go:141] libmachine: Decoding PEM data...
	I0408 19:19:12.446325  189782 main.go:141] libmachine: Parsing certificate...
	I0408 19:19:12.446405  189782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem
	I0408 19:19:12.446432  189782 main.go:141] libmachine: Decoding PEM data...
	I0408 19:19:12.446450  189782 main.go:141] libmachine: Parsing certificate...
	I0408 19:19:12.446474  189782 main.go:141] libmachine: Running pre-create checks...
	I0408 19:19:12.446488  189782 main.go:141] libmachine: (auto-880875) Calling .PreCreateCheck
	I0408 19:19:12.446956  189782 main.go:141] libmachine: (auto-880875) Calling .GetConfigRaw
	I0408 19:19:12.447508  189782 main.go:141] libmachine: Creating machine...
	I0408 19:19:12.447526  189782 main.go:141] libmachine: (auto-880875) Calling .Create
	I0408 19:19:12.447747  189782 main.go:141] libmachine: (auto-880875) creating KVM machine...
	I0408 19:19:12.447771  189782 main.go:141] libmachine: (auto-880875) creating network...
	I0408 19:19:12.449370  189782 main.go:141] libmachine: (auto-880875) DBG | found existing default KVM network
	I0408 19:19:12.451113  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:12.450881  190071 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204b90}
	I0408 19:19:12.451199  189782 main.go:141] libmachine: (auto-880875) DBG | created network xml: 
	I0408 19:19:12.451222  189782 main.go:141] libmachine: (auto-880875) DBG | <network>
	I0408 19:19:12.451233  189782 main.go:141] libmachine: (auto-880875) DBG |   <name>mk-auto-880875</name>
	I0408 19:19:12.451241  189782 main.go:141] libmachine: (auto-880875) DBG |   <dns enable='no'/>
	I0408 19:19:12.451248  189782 main.go:141] libmachine: (auto-880875) DBG |   
	I0408 19:19:12.451260  189782 main.go:141] libmachine: (auto-880875) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 19:19:12.451269  189782 main.go:141] libmachine: (auto-880875) DBG |     <dhcp>
	I0408 19:19:12.451279  189782 main.go:141] libmachine: (auto-880875) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 19:19:12.451288  189782 main.go:141] libmachine: (auto-880875) DBG |     </dhcp>
	I0408 19:19:12.451295  189782 main.go:141] libmachine: (auto-880875) DBG |   </ip>
	I0408 19:19:12.451302  189782 main.go:141] libmachine: (auto-880875) DBG |   
	I0408 19:19:12.451308  189782 main.go:141] libmachine: (auto-880875) DBG | </network>
	I0408 19:19:12.451317  189782 main.go:141] libmachine: (auto-880875) DBG | 
	I0408 19:19:12.457521  189782 main.go:141] libmachine: (auto-880875) DBG | trying to create private KVM network mk-auto-880875 192.168.39.0/24...
	I0408 19:19:12.542822  189782 main.go:141] libmachine: (auto-880875) setting up store path in /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875 ...
	I0408 19:19:12.542852  189782 main.go:141] libmachine: (auto-880875) DBG | private KVM network mk-auto-880875 192.168.39.0/24 created
	I0408 19:19:12.542866  189782 main.go:141] libmachine: (auto-880875) building disk image from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 19:19:12.542877  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:12.542727  190071 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:19:12.542892  189782 main.go:141] libmachine: (auto-880875) Downloading /home/jenkins/minikube-integration/20604-141129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 19:19:12.869249  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:12.869030  190071 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa...
	I0408 19:19:13.340229  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:13.340053  190071 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/auto-880875.rawdisk...
	I0408 19:19:13.340261  189782 main.go:141] libmachine: (auto-880875) DBG | Writing magic tar header
	I0408 19:19:13.340275  189782 main.go:141] libmachine: (auto-880875) DBG | Writing SSH key tar header
	I0408 19:19:13.340287  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:13.340193  190071 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875 ...
	I0408 19:19:13.340305  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875
	I0408 19:19:13.340334  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875 (perms=drwx------)
	I0408 19:19:13.340355  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines (perms=drwxr-xr-x)
	I0408 19:19:13.340369  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines
	I0408 19:19:13.340383  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube (perms=drwxr-xr-x)
	I0408 19:19:13.340454  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:19:13.340490  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129 (perms=drwxrwxr-x)
	I0408 19:19:13.340501  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129
	I0408 19:19:13.340518  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 19:19:13.340529  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home/jenkins
	I0408 19:19:13.340540  189782 main.go:141] libmachine: (auto-880875) DBG | checking permissions on dir: /home
	I0408 19:19:13.340550  189782 main.go:141] libmachine: (auto-880875) DBG | skipping /home - not owner
	I0408 19:19:13.340563  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 19:19:13.340573  189782 main.go:141] libmachine: (auto-880875) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 19:19:13.340583  189782 main.go:141] libmachine: (auto-880875) creating domain...
	I0408 19:19:13.341984  189782 main.go:141] libmachine: (auto-880875) define libvirt domain using xml: 
	I0408 19:19:13.342005  189782 main.go:141] libmachine: (auto-880875) <domain type='kvm'>
	I0408 19:19:13.342014  189782 main.go:141] libmachine: (auto-880875)   <name>auto-880875</name>
	I0408 19:19:13.342020  189782 main.go:141] libmachine: (auto-880875)   <memory unit='MiB'>3072</memory>
	I0408 19:19:13.342028  189782 main.go:141] libmachine: (auto-880875)   <vcpu>2</vcpu>
	I0408 19:19:13.342033  189782 main.go:141] libmachine: (auto-880875)   <features>
	I0408 19:19:13.342040  189782 main.go:141] libmachine: (auto-880875)     <acpi/>
	I0408 19:19:13.342046  189782 main.go:141] libmachine: (auto-880875)     <apic/>
	I0408 19:19:13.342058  189782 main.go:141] libmachine: (auto-880875)     <pae/>
	I0408 19:19:13.342064  189782 main.go:141] libmachine: (auto-880875)     
	I0408 19:19:13.342070  189782 main.go:141] libmachine: (auto-880875)   </features>
	I0408 19:19:13.342077  189782 main.go:141] libmachine: (auto-880875)   <cpu mode='host-passthrough'>
	I0408 19:19:13.342083  189782 main.go:141] libmachine: (auto-880875)   
	I0408 19:19:13.342089  189782 main.go:141] libmachine: (auto-880875)   </cpu>
	I0408 19:19:13.342096  189782 main.go:141] libmachine: (auto-880875)   <os>
	I0408 19:19:13.342101  189782 main.go:141] libmachine: (auto-880875)     <type>hvm</type>
	I0408 19:19:13.342109  189782 main.go:141] libmachine: (auto-880875)     <boot dev='cdrom'/>
	I0408 19:19:13.342114  189782 main.go:141] libmachine: (auto-880875)     <boot dev='hd'/>
	I0408 19:19:13.342122  189782 main.go:141] libmachine: (auto-880875)     <bootmenu enable='no'/>
	I0408 19:19:13.342127  189782 main.go:141] libmachine: (auto-880875)   </os>
	I0408 19:19:13.342133  189782 main.go:141] libmachine: (auto-880875)   <devices>
	I0408 19:19:13.342139  189782 main.go:141] libmachine: (auto-880875)     <disk type='file' device='cdrom'>
	I0408 19:19:13.342153  189782 main.go:141] libmachine: (auto-880875)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/boot2docker.iso'/>
	I0408 19:19:13.342162  189782 main.go:141] libmachine: (auto-880875)       <target dev='hdc' bus='scsi'/>
	I0408 19:19:13.342170  189782 main.go:141] libmachine: (auto-880875)       <readonly/>
	I0408 19:19:13.342176  189782 main.go:141] libmachine: (auto-880875)     </disk>
	I0408 19:19:13.342184  189782 main.go:141] libmachine: (auto-880875)     <disk type='file' device='disk'>
	I0408 19:19:13.342191  189782 main.go:141] libmachine: (auto-880875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 19:19:13.342201  189782 main.go:141] libmachine: (auto-880875)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/auto-880875.rawdisk'/>
	I0408 19:19:13.342208  189782 main.go:141] libmachine: (auto-880875)       <target dev='hda' bus='virtio'/>
	I0408 19:19:13.342214  189782 main.go:141] libmachine: (auto-880875)     </disk>
	I0408 19:19:13.342221  189782 main.go:141] libmachine: (auto-880875)     <interface type='network'>
	I0408 19:19:13.342229  189782 main.go:141] libmachine: (auto-880875)       <source network='mk-auto-880875'/>
	I0408 19:19:13.342236  189782 main.go:141] libmachine: (auto-880875)       <model type='virtio'/>
	I0408 19:19:13.342243  189782 main.go:141] libmachine: (auto-880875)     </interface>
	I0408 19:19:13.342248  189782 main.go:141] libmachine: (auto-880875)     <interface type='network'>
	I0408 19:19:13.342256  189782 main.go:141] libmachine: (auto-880875)       <source network='default'/>
	I0408 19:19:13.342262  189782 main.go:141] libmachine: (auto-880875)       <model type='virtio'/>
	I0408 19:19:13.342278  189782 main.go:141] libmachine: (auto-880875)     </interface>
	I0408 19:19:13.342285  189782 main.go:141] libmachine: (auto-880875)     <serial type='pty'>
	I0408 19:19:13.342293  189782 main.go:141] libmachine: (auto-880875)       <target port='0'/>
	I0408 19:19:13.342298  189782 main.go:141] libmachine: (auto-880875)     </serial>
	I0408 19:19:13.342305  189782 main.go:141] libmachine: (auto-880875)     <console type='pty'>
	I0408 19:19:13.342318  189782 main.go:141] libmachine: (auto-880875)       <target type='serial' port='0'/>
	I0408 19:19:13.342326  189782 main.go:141] libmachine: (auto-880875)     </console>
	I0408 19:19:13.342332  189782 main.go:141] libmachine: (auto-880875)     <rng model='virtio'>
	I0408 19:19:13.342342  189782 main.go:141] libmachine: (auto-880875)       <backend model='random'>/dev/random</backend>
	I0408 19:19:13.342348  189782 main.go:141] libmachine: (auto-880875)     </rng>
	I0408 19:19:13.342355  189782 main.go:141] libmachine: (auto-880875)     
	I0408 19:19:13.342361  189782 main.go:141] libmachine: (auto-880875)     
	I0408 19:19:13.342373  189782 main.go:141] libmachine: (auto-880875)   </devices>
	I0408 19:19:13.342379  189782 main.go:141] libmachine: (auto-880875) </domain>
	I0408 19:19:13.342391  189782 main.go:141] libmachine: (auto-880875) 
	I0408 19:19:13.347112  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:12:3c:aa in network default
	I0408 19:19:13.347846  189782 main.go:141] libmachine: (auto-880875) starting domain...
	I0408 19:19:13.347876  189782 main.go:141] libmachine: (auto-880875) ensuring networks are active...
	I0408 19:19:13.347887  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:13.348699  189782 main.go:141] libmachine: (auto-880875) Ensuring network default is active
	I0408 19:19:13.349258  189782 main.go:141] libmachine: (auto-880875) Ensuring network mk-auto-880875 is active
	I0408 19:19:13.349924  189782 main.go:141] libmachine: (auto-880875) getting domain XML...
	I0408 19:19:13.351010  189782 main.go:141] libmachine: (auto-880875) creating domain...
	I0408 19:19:14.787575  189782 main.go:141] libmachine: (auto-880875) waiting for IP...
	I0408 19:19:14.788437  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:14.790592  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:14.790811  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:14.790553  190071 retry.go:31] will retry after 253.496584ms: waiting for domain to come up
	I0408 19:19:15.351658  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:15.352273  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:15.352346  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:15.352256  190071 retry.go:31] will retry after 276.072891ms: waiting for domain to come up
	I0408 19:19:15.629945  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:15.630487  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:15.630543  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:15.630467  190071 retry.go:31] will retry after 321.137765ms: waiting for domain to come up
	I0408 19:19:15.953033  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:15.953637  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:15.953676  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:15.953616  190071 retry.go:31] will retry after 427.041627ms: waiting for domain to come up
	I0408 19:19:16.382282  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:16.382808  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:16.382833  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:16.382775  190071 retry.go:31] will retry after 552.894992ms: waiting for domain to come up
	I0408 19:19:16.937847  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:16.938467  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:16.938503  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:16.938434  190071 retry.go:31] will retry after 888.987207ms: waiting for domain to come up
	I0408 19:19:15.571005  190277 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:19:15.571074  190277 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 19:19:15.571085  190277 cache.go:56] Caching tarball of preloaded images
	I0408 19:19:15.571211  190277 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:19:15.571228  190277 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 19:19:15.571363  190277 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/config.json ...
	I0408 19:19:15.571394  190277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/config.json: {Name:mk1f6800789d702f6ef4ae3332caff0b1b24a463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:15.571586  190277 start.go:360] acquireMachinesLock for kindnet-880875: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:19:17.830626  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:17.831177  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:17.831236  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:17.831164  190071 retry.go:31] will retry after 878.527531ms: waiting for domain to come up
	I0408 19:19:18.711433  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:18.711942  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:18.711969  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:18.711904  190071 retry.go:31] will retry after 895.025058ms: waiting for domain to come up
	I0408 19:19:19.608290  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:19.608894  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:19.608928  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:19.608851  190071 retry.go:31] will retry after 1.219015907s: waiting for domain to come up
	I0408 19:19:20.829200  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:20.829663  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:20.829696  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:20.829637  190071 retry.go:31] will retry after 1.880845711s: waiting for domain to come up
	I0408 19:19:22.712750  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:22.713472  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:22.713508  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:22.713417  190071 retry.go:31] will retry after 2.311369799s: waiting for domain to come up
	I0408 19:19:25.027335  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:25.027863  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:25.027888  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:25.027832  190071 retry.go:31] will retry after 3.267323556s: waiting for domain to come up
	I0408 19:19:28.296688  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:28.297117  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:28.297192  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:28.297101  190071 retry.go:31] will retry after 4.1425093s: waiting for domain to come up
	I0408 19:19:32.443074  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:32.443540  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find current IP address of domain auto-880875 in network mk-auto-880875
	I0408 19:19:32.443567  189782 main.go:141] libmachine: (auto-880875) DBG | I0408 19:19:32.443494  190071 retry.go:31] will retry after 4.486468206s: waiting for domain to come up
	I0408 19:19:36.932004  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:36.932563  189782 main.go:141] libmachine: (auto-880875) found domain IP: 192.168.39.229
	I0408 19:19:36.932590  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has current primary IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:36.932596  189782 main.go:141] libmachine: (auto-880875) reserving static IP address...
	I0408 19:19:36.933021  189782 main.go:141] libmachine: (auto-880875) DBG | unable to find host DHCP lease matching {name: "auto-880875", mac: "52:54:00:ba:21:84", ip: "192.168.39.229"} in network mk-auto-880875
	I0408 19:19:37.025743  189782 main.go:141] libmachine: (auto-880875) reserved static IP address 192.168.39.229 for domain auto-880875
	I0408 19:19:37.025773  189782 main.go:141] libmachine: (auto-880875) waiting for SSH...
	I0408 19:19:37.025811  189782 main.go:141] libmachine: (auto-880875) DBG | Getting to WaitForSSH function...
	I0408 19:19:37.029190  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.029715  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.029749  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.029921  189782 main.go:141] libmachine: (auto-880875) DBG | Using SSH client type: external
	I0408 19:19:37.029950  189782 main.go:141] libmachine: (auto-880875) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa (-rw-------)
	I0408 19:19:37.029982  189782 main.go:141] libmachine: (auto-880875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:19:37.030011  189782 main.go:141] libmachine: (auto-880875) DBG | About to run SSH command:
	I0408 19:19:37.030068  189782 main.go:141] libmachine: (auto-880875) DBG | exit 0
	I0408 19:19:37.154343  189782 main.go:141] libmachine: (auto-880875) DBG | SSH cmd err, output: <nil>: 
	I0408 19:19:37.154654  189782 main.go:141] libmachine: (auto-880875) KVM machine creation complete
	I0408 19:19:37.154952  189782 main.go:141] libmachine: (auto-880875) Calling .GetConfigRaw
	I0408 19:19:37.155634  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:37.155885  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:37.156175  189782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 19:19:37.156197  189782 main.go:141] libmachine: (auto-880875) Calling .GetState
	I0408 19:19:37.157661  189782 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 19:19:37.157678  189782 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 19:19:37.157701  189782 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 19:19:37.157709  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.160342  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.160869  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.160903  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.161040  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:37.161225  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.161401  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.161587  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:37.161785  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:37.162071  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:37.162085  189782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 19:19:38.586518  190032 start.go:364] duration metric: took 28.178703199s to acquireMachinesLock for "kubernetes-upgrade-958400"
	I0408 19:19:38.586591  190032 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:19:38.586599  190032 fix.go:54] fixHost starting: 
	I0408 19:19:38.587060  190032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:19:38.587107  190032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:19:38.604784  190032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0408 19:19:38.605306  190032 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:19:38.605856  190032 main.go:141] libmachine: Using API Version  1
	I0408 19:19:38.605895  190032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:19:38.606317  190032 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:19:38.606581  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:38.606768  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetState
	I0408 19:19:38.608656  190032 fix.go:112] recreateIfNeeded on kubernetes-upgrade-958400: state=Running err=<nil>
	W0408 19:19:38.608678  190032 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:19:38.610935  190032 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-958400" VM ...
	I0408 19:19:37.261291  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:19:37.261317  189782 main.go:141] libmachine: Detecting the provisioner...
	I0408 19:19:37.261325  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.264550  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.264946  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.264972  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.265275  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:37.265533  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.265748  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.265952  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:37.266140  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:37.266462  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:37.266483  189782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 19:19:37.371065  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 19:19:37.371132  189782 main.go:141] libmachine: found compatible host: buildroot
	I0408 19:19:37.371138  189782 main.go:141] libmachine: Provisioning with buildroot...
	I0408 19:19:37.371146  189782 main.go:141] libmachine: (auto-880875) Calling .GetMachineName
	I0408 19:19:37.371447  189782 buildroot.go:166] provisioning hostname "auto-880875"
	I0408 19:19:37.371480  189782 main.go:141] libmachine: (auto-880875) Calling .GetMachineName
	I0408 19:19:37.371720  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.375074  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.375406  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.375434  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.375653  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:37.375847  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.376037  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.376172  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:37.376342  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:37.376564  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:37.376576  189782 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-880875 && echo "auto-880875" | sudo tee /etc/hostname
	I0408 19:19:37.492822  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-880875
	
	I0408 19:19:37.492854  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.496131  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.496516  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.496549  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.496773  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:37.497025  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.497196  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.497309  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:37.497485  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:37.497777  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:37.497801  189782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-880875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-880875/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-880875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:19:37.610896  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:19:37.610928  189782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:19:37.610950  189782 buildroot.go:174] setting up certificates
	I0408 19:19:37.610962  189782 provision.go:84] configureAuth start
	I0408 19:19:37.610978  189782 main.go:141] libmachine: (auto-880875) Calling .GetMachineName
	I0408 19:19:37.611317  189782 main.go:141] libmachine: (auto-880875) Calling .GetIP
	I0408 19:19:37.614291  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.614728  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.614758  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.614960  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.617418  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.617771  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.617799  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.617968  189782 provision.go:143] copyHostCerts
	I0408 19:19:37.618037  189782 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:19:37.618063  189782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:19:37.618139  189782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:19:37.618259  189782 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:19:37.618270  189782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:19:37.618303  189782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:19:37.618382  189782 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:19:37.618392  189782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:19:37.618419  189782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:19:37.618491  189782 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.auto-880875 san=[127.0.0.1 192.168.39.229 auto-880875 localhost minikube]
	I0408 19:19:37.951492  189782 provision.go:177] copyRemoteCerts
	I0408 19:19:37.951583  189782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:19:37.951615  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:37.954372  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.954662  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:37.954695  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:37.954810  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:37.955012  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:37.955156  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:37.955303  189782 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa Username:docker}
	I0408 19:19:38.036631  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:19:38.063646  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0408 19:19:38.090447  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:19:38.115776  189782 provision.go:87] duration metric: took 504.797287ms to configureAuth
	I0408 19:19:38.115806  189782 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:19:38.116010  189782 config.go:182] Loaded profile config "auto-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:19:38.116108  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:38.119016  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.119400  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.119431  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.119643  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:38.119872  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.120088  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.120222  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:38.120375  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:38.120575  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:38.120589  189782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:19:38.348433  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:19:38.348463  189782 main.go:141] libmachine: Checking connection to Docker...
	I0408 19:19:38.348473  189782 main.go:141] libmachine: (auto-880875) Calling .GetURL
	I0408 19:19:38.349854  189782 main.go:141] libmachine: (auto-880875) DBG | using libvirt version 6000000
	I0408 19:19:38.352183  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.352530  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.352562  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.352725  189782 main.go:141] libmachine: Docker is up and running!
	I0408 19:19:38.352740  189782 main.go:141] libmachine: Reticulating splines...
	I0408 19:19:38.352748  189782 client.go:171] duration metric: took 25.906534017s to LocalClient.Create
	I0408 19:19:38.352778  189782 start.go:167] duration metric: took 25.906627898s to libmachine.API.Create "auto-880875"
	I0408 19:19:38.352792  189782 start.go:293] postStartSetup for "auto-880875" (driver="kvm2")
	I0408 19:19:38.352818  189782 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:19:38.352843  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:38.353155  189782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:19:38.353184  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:38.355300  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.355592  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.355625  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.355824  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:38.356032  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.356198  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:38.356339  189782 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa Username:docker}
	I0408 19:19:38.437074  189782 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:19:38.441758  189782 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:19:38.441790  189782 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:19:38.441896  189782 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:19:38.442015  189782 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:19:38.442142  189782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:19:38.451944  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:19:38.476578  189782 start.go:296] duration metric: took 123.766332ms for postStartSetup
	I0408 19:19:38.476640  189782 main.go:141] libmachine: (auto-880875) Calling .GetConfigRaw
	I0408 19:19:38.477297  189782 main.go:141] libmachine: (auto-880875) Calling .GetIP
	I0408 19:19:38.480315  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.480653  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.480680  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.481040  189782 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/config.json ...
	I0408 19:19:38.481312  189782 start.go:128] duration metric: took 26.058275852s to createHost
	I0408 19:19:38.481347  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:38.484008  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.484383  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.484414  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.484558  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:38.484809  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.484985  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.485143  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:38.485316  189782 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:38.485553  189782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0408 19:19:38.485564  189782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:19:38.586347  189782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744139978.563859722
	
	I0408 19:19:38.586373  189782 fix.go:216] guest clock: 1744139978.563859722
	I0408 19:19:38.586385  189782 fix.go:229] Guest: 2025-04-08 19:19:38.563859722 +0000 UTC Remote: 2025-04-08 19:19:38.481329968 +0000 UTC m=+46.268857030 (delta=82.529754ms)
	I0408 19:19:38.586414  189782 fix.go:200] guest clock delta is within tolerance: 82.529754ms
	I0408 19:19:38.586421  189782 start.go:83] releasing machines lock for "auto-880875", held for 26.16355053s
	I0408 19:19:38.586445  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:38.586729  189782 main.go:141] libmachine: (auto-880875) Calling .GetIP
	I0408 19:19:38.589820  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.590194  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.590222  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.590456  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:38.591020  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:38.591192  189782 main.go:141] libmachine: (auto-880875) Calling .DriverName
	I0408 19:19:38.591282  189782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:19:38.591333  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:38.591442  189782 ssh_runner.go:195] Run: cat /version.json
	I0408 19:19:38.591471  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHHostname
	I0408 19:19:38.594237  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.594321  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.594717  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.594750  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:38.594770  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.594785  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:38.594947  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:38.595119  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHPort
	I0408 19:19:38.595145  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.595288  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHKeyPath
	I0408 19:19:38.595300  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:38.595429  189782 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa Username:docker}
	I0408 19:19:38.595465  189782 main.go:141] libmachine: (auto-880875) Calling .GetSSHUsername
	I0408 19:19:38.595622  189782 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/auto-880875/id_rsa Username:docker}
	I0408 19:19:38.674774  189782 ssh_runner.go:195] Run: systemctl --version
	I0408 19:19:38.700172  189782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:19:38.860898  189782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:19:38.867297  189782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:19:38.867363  189782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:19:38.885396  189782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:19:38.885421  189782 start.go:495] detecting cgroup driver to use...
	I0408 19:19:38.885490  189782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:19:38.907747  189782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:19:38.926063  189782 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:19:38.926128  189782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:19:38.940260  189782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:19:38.955304  189782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:19:39.084762  189782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:19:39.251278  189782 docker.go:233] disabling docker service ...
	I0408 19:19:39.251358  189782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:19:39.268612  189782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:19:39.283950  189782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:19:39.401531  189782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:19:39.541743  189782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:19:39.558000  189782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:19:39.579565  189782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 19:19:39.579651  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.591369  189782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:19:39.591459  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.602565  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.613215  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.623868  189782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:19:39.635639  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.646915  189782 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.668621  189782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:39.680610  189782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:19:39.691535  189782 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:19:39.691602  189782 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:19:39.707336  189782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:19:39.718448  189782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:19:39.839501  189782 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:19:39.947668  189782 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:19:39.947752  189782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:19:39.953062  189782 start.go:563] Will wait 60s for crictl version
	I0408 19:19:39.953134  189782 ssh_runner.go:195] Run: which crictl
	I0408 19:19:39.957373  189782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:19:39.996879  189782 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:19:39.996985  189782 ssh_runner.go:195] Run: crio --version
	I0408 19:19:40.030787  189782 ssh_runner.go:195] Run: crio --version
	I0408 19:19:40.065296  189782 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 19:19:38.612433  190032 machine.go:93] provisionDockerMachine start ...
	I0408 19:19:38.612474  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:38.612725  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:38.615513  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.616069  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:38.616093  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.616323  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:38.616498  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.616663  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.616768  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:38.617015  190032 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:38.617330  190032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:19:38.617350  190032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:19:38.735172  190032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-958400
	
	I0408 19:19:38.735207  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:19:38.735469  190032 buildroot.go:166] provisioning hostname "kubernetes-upgrade-958400"
	I0408 19:19:38.735511  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:19:38.735724  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:38.739117  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.739574  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:38.739610  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.739783  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:38.740014  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.740215  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.740399  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:38.740571  190032 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:38.740858  190032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:19:38.740874  190032 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-958400 && echo "kubernetes-upgrade-958400" | sudo tee /etc/hostname
	I0408 19:19:38.868593  190032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-958400
	
	I0408 19:19:38.868635  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:38.871680  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.871986  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:38.872026  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.872330  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:38.872507  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.872659  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:38.872758  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:38.872976  190032 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:38.873268  190032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:19:38.873298  190032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-958400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-958400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-958400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:19:38.987881  190032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:19:38.987937  190032 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:19:38.987970  190032 buildroot.go:174] setting up certificates
	I0408 19:19:38.987988  190032 provision.go:84] configureAuth start
	I0408 19:19:38.988003  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetMachineName
	I0408 19:19:38.988418  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:19:38.991717  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.992138  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:38.992194  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.992607  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:38.995649  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.996061  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:38.996115  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:38.996306  190032 provision.go:143] copyHostCerts
	I0408 19:19:38.996371  190032 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:19:38.996389  190032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:19:38.996464  190032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:19:38.996583  190032 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:19:38.996595  190032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:19:38.996623  190032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:19:38.996695  190032 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:19:38.996704  190032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:19:38.996728  190032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:19:38.996802  190032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-958400 san=[127.0.0.1 192.168.50.182 kubernetes-upgrade-958400 localhost minikube]
	I0408 19:19:39.128371  190032 provision.go:177] copyRemoteCerts
	I0408 19:19:39.128431  190032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:19:39.128457  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:39.131402  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:39.131826  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:39.131853  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:39.132082  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:39.132380  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:39.132620  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:39.132815  190032 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:19:39.221092  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:19:39.250179  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0408 19:19:39.275507  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:19:39.303126  190032 provision.go:87] duration metric: took 315.119958ms to configureAuth
	I0408 19:19:39.303162  190032 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:19:39.303384  190032 config.go:182] Loaded profile config "kubernetes-upgrade-958400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:19:39.303483  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:39.306762  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:39.307213  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:39.307248  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:39.307435  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:39.307645  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:39.307802  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:39.307931  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:39.308128  190032 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:39.308421  190032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:19:39.308448  190032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:19:40.066844  189782 main.go:141] libmachine: (auto-880875) Calling .GetIP
	I0408 19:19:40.070279  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:40.070598  189782 main.go:141] libmachine: (auto-880875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:21:84", ip: ""} in network mk-auto-880875: {Iface:virbr1 ExpiryTime:2025-04-08 20:19:27 +0000 UTC Type:0 Mac:52:54:00:ba:21:84 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:auto-880875 Clientid:01:52:54:00:ba:21:84}
	I0408 19:19:40.070626  189782 main.go:141] libmachine: (auto-880875) DBG | domain auto-880875 has defined IP address 192.168.39.229 and MAC address 52:54:00:ba:21:84 in network mk-auto-880875
	I0408 19:19:40.070843  189782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 19:19:40.075256  189782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:19:40.088559  189782 kubeadm.go:883] updating cluster {Name:auto-880875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:auto-880875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:19:40.088682  189782 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:19:40.088746  189782 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:19:40.121212  189782 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 19:19:40.121292  189782 ssh_runner.go:195] Run: which lz4
	I0408 19:19:40.125391  189782 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:19:40.129882  189782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:19:40.129930  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 19:19:41.511600  189782 crio.go:462] duration metric: took 1.386259564s to copy over tarball
	I0408 19:19:41.511699  189782 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:19:47.186730  190277 start.go:364] duration metric: took 31.615077225s to acquireMachinesLock for "kindnet-880875"
	I0408 19:19:47.186815  190277 start.go:93] Provisioning new machine with config: &{Name:kindnet-880875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kindnet-880875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:19:47.186988  190277 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 19:19:43.945097  189782 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.433361059s)
	I0408 19:19:43.945131  189782 crio.go:469] duration metric: took 2.43349208s to extract the tarball
	I0408 19:19:43.945141  189782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:19:43.992224  189782 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:19:44.034417  189782 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:19:44.034443  189782 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:19:44.034452  189782 kubeadm.go:934] updating node { 192.168.39.229 8443 v1.32.2 crio true true} ...
	I0408 19:19:44.034556  189782 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-880875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:auto-880875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:19:44.034620  189782 ssh_runner.go:195] Run: crio config
	I0408 19:19:44.087115  189782 cni.go:84] Creating CNI manager for ""
	I0408 19:19:44.087137  189782 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:19:44.087152  189782 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:19:44.087182  189782 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-880875 NodeName:auto-880875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:19:44.087350  189782 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-880875"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:19:44.087433  189782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 19:19:44.098735  189782 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:19:44.098833  189782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:19:44.108957  189782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0408 19:19:44.128714  189782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:19:44.148056  189782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0408 19:19:44.167058  189782 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0408 19:19:44.171115  189782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:19:44.184669  189782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:19:44.315172  189782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:19:44.333857  189782 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875 for IP: 192.168.39.229
	I0408 19:19:44.333887  189782 certs.go:194] generating shared ca certs ...
	I0408 19:19:44.333910  189782 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:44.334117  189782 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:19:44.334156  189782 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:19:44.334166  189782 certs.go:256] generating profile certs ...
	I0408 19:19:44.334224  189782 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.key
	I0408 19:19:44.334241  189782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt with IP's: []
	I0408 19:19:44.591811  189782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt ...
	I0408 19:19:44.591855  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: {Name:mkbe25f525622c05f6ffbc137b1edd13712398f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:44.592116  189782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.key ...
	I0408 19:19:44.592136  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.key: {Name:mkabfbd44c81c6ca8defa9a8575eb960b3fa0c6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:44.592265  189782 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key.6649a4d2
	I0408 19:19:44.592293  189782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt.6649a4d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.229]
	I0408 19:19:44.646220  189782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt.6649a4d2 ...
	I0408 19:19:44.646257  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt.6649a4d2: {Name:mk54d629dcec508abff9650a3ddf352067c8b52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:44.646438  189782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key.6649a4d2 ...
	I0408 19:19:44.646452  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key.6649a4d2: {Name:mk82cfcbc3ea87e64ba1fe7f5b1f6ee3db05371a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:44.646522  189782 certs.go:381] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt.6649a4d2 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt
	I0408 19:19:44.646603  189782 certs.go:385] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key.6649a4d2 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key
	I0408 19:19:44.646657  189782 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.key
	I0408 19:19:44.646676  189782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.crt with IP's: []
	I0408 19:19:45.035204  189782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.crt ...
	I0408 19:19:45.035236  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.crt: {Name:mk02bb0da953cf829d14bca4b2d32b26590d79d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:45.035427  189782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.key ...
	I0408 19:19:45.035439  189782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.key: {Name:mk2f1e1ecdc4310021a693fcbc6328069ab271a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:45.035603  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:19:45.035639  189782 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:19:45.035649  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:19:45.035675  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:19:45.035701  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:19:45.035720  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:19:45.035756  189782 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:19:45.036402  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:19:45.074537  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:19:45.110629  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:19:45.139403  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:19:45.166771  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0408 19:19:45.194058  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 19:19:45.219398  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:19:45.271162  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 19:19:45.296659  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:19:45.321618  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:19:45.347073  189782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:19:45.374487  189782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:19:45.391748  189782 ssh_runner.go:195] Run: openssl version
	I0408 19:19:45.398005  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:19:45.409014  189782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:19:45.413703  189782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:19:45.413778  189782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:19:45.420371  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:19:45.431275  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:19:45.442453  189782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:19:45.447118  189782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:19:45.447196  189782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:19:45.453383  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:19:45.464362  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:19:45.475709  189782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:45.480478  189782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:45.480561  189782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:45.486511  189782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:19:45.497696  189782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:19:45.502366  189782 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 19:19:45.502442  189782 kubeadm.go:392] StartCluster: {Name:auto-880875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:auto-880875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:19:45.502577  189782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:19:45.502638  189782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:19:45.545537  189782 cri.go:89] found id: ""
	I0408 19:19:45.545625  189782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:19:45.555421  189782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:19:45.564930  189782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:19:45.574351  189782 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:19:45.574374  189782 kubeadm.go:157] found existing configuration files:
	
	I0408 19:19:45.574423  189782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:19:45.583701  189782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:19:45.583766  189782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:19:45.593125  189782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:19:45.601743  189782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:19:45.601820  189782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:19:45.611136  189782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:19:45.620078  189782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:19:45.620148  189782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:19:45.629387  189782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:19:45.639105  189782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:19:45.639190  189782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:19:45.648785  189782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:19:45.810813  189782 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:19:46.933261  190032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:19:46.933308  190032 machine.go:96] duration metric: took 8.320848303s to provisionDockerMachine
	I0408 19:19:46.933327  190032 start.go:293] postStartSetup for "kubernetes-upgrade-958400" (driver="kvm2")
	I0408 19:19:46.933343  190032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:19:46.933375  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:46.933768  190032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:19:46.933883  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:46.937025  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:46.937441  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:46.937471  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:46.937655  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:46.937888  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:46.938100  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:46.938287  190032 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:19:47.028777  190032 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:19:47.033091  190032 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:19:47.033127  190032 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:19:47.033202  190032 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:19:47.033310  190032 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:19:47.033435  190032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:19:47.044463  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:19:47.069672  190032 start.go:296] duration metric: took 136.326239ms for postStartSetup
	I0408 19:19:47.069722  190032 fix.go:56] duration metric: took 8.483122466s for fixHost
	I0408 19:19:47.069747  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:47.072709  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.073039  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:47.073068  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.073278  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:47.073467  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:47.073616  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:47.073730  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:47.073931  190032 main.go:141] libmachine: Using SSH client type: native
	I0408 19:19:47.074148  190032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0408 19:19:47.074159  190032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:19:47.186516  190032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744139987.181648380
	
	I0408 19:19:47.186543  190032 fix.go:216] guest clock: 1744139987.181648380
	I0408 19:19:47.186554  190032 fix.go:229] Guest: 2025-04-08 19:19:47.18164838 +0000 UTC Remote: 2025-04-08 19:19:47.069727403 +0000 UTC m=+36.817758554 (delta=111.920977ms)
	I0408 19:19:47.186584  190032 fix.go:200] guest clock delta is within tolerance: 111.920977ms
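
fix.go reads the guest clock over SSH with `date +%s.%N` and compares it with the host clock, only resynchronizing when the delta exceeds a tolerance. A rough Go sketch of that comparison using the timestamps from the log above; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses `date +%s.%N` output from the guest and returns
	// how far the guest clock is ahead of (or behind) the host clock.
	func clockDelta(guestDate string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Timestamps taken from the log lines above.
		delta, err := clockDelta("1744139987.181648380", time.Unix(1744139987, 69727403))
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		tolerance := time.Second // assumed tolerance, for illustration only
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}
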
	I0408 19:19:47.186592  190032 start.go:83] releasing machines lock for "kubernetes-upgrade-958400", held for 8.600028655s
	I0408 19:19:47.186629  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:47.186958  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:19:47.190384  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.190880  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:47.190910  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.191142  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:47.191786  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:47.191970  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .DriverName
	I0408 19:19:47.192089  190032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:19:47.192180  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:47.192184  190032 ssh_runner.go:195] Run: cat /version.json
	I0408 19:19:47.192298  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHHostname
	I0408 19:19:47.195478  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.195882  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:47.195949  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.195973  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.196163  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:47.196356  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:47.196357  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:47.196426  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:47.196522  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:47.196725  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHPort
	I0408 19:19:47.196737  190032 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
	I0408 19:19:47.196890  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHKeyPath
	I0408 19:19:47.197051  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetSSHUsername
	I0408 19:19:47.197195  190032 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kubernetes-upgrade-958400/id_rsa Username:docker}
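
sshutil.go builds SSH clients for the VM from the machine's generated id_rsa key and the docker user; two clients are opened here so the registry reachability probe and the version check can run in parallel. This is not minikube's sshutil, but a minimal equivalent using golang.org/x/crypto/ssh; the key path, address and command are placeholders:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/example/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		}
		client, err := ssh.Dial("tcp", "192.168.50.182:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.Output("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
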
	I0408 19:19:47.301053  190032 ssh_runner.go:195] Run: systemctl --version
	I0408 19:19:47.309075  190032 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:19:47.467687  190032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:19:47.476955  190032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:19:47.477057  190032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:19:47.487331  190032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 19:19:47.487370  190032 start.go:495] detecting cgroup driver to use...
	I0408 19:19:47.487446  190032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:19:47.507395  190032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:19:47.521910  190032 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:19:47.521986  190032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:19:47.536858  190032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:19:47.551926  190032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:19:47.755108  190032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:19:48.080025  190032 docker.go:233] disabling docker service ...
	I0408 19:19:48.080127  190032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:19:48.239191  190032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:19:48.326886  190032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:19:48.713624  190032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:19:49.072149  190032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:19:49.137175  190032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:19:49.214750  190032 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 19:19:49.214874  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.258784  190032 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:19:49.258869  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.287566  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.315888  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.365121  190032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:19:49.407846  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.460757  190032 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.503819  190032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:19:49.539871  190032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:19:49.566299  190032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:19:49.587495  190032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:19:49.868868  190032 ssh_runner.go:195] Run: sudo systemctl restart crio
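
The sed edits above rewrite CRI-O's drop-in config in place before the daemon is restarted: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports via a default sysctl. Reconstructed from those commands (not copied from the VM), /etc/crio/crio.conf.d/02-crio.conf should end up containing lines roughly like these, under CRI-O's [crio.image] and [crio.runtime] tables:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
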
	I0408 19:19:47.188731  190277 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 19:19:47.188988  190277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:19:47.189036  190277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:19:47.207960  190277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38069
	I0408 19:19:47.208579  190277 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:19:47.209310  190277 main.go:141] libmachine: Using API Version  1
	I0408 19:19:47.209344  190277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:19:47.209737  190277 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:19:47.209965  190277 main.go:141] libmachine: (kindnet-880875) Calling .GetMachineName
	I0408 19:19:47.210091  190277 main.go:141] libmachine: (kindnet-880875) Calling .DriverName
	I0408 19:19:47.210277  190277 start.go:159] libmachine.API.Create for "kindnet-880875" (driver="kvm2")
	I0408 19:19:47.210341  190277 client.go:168] LocalClient.Create starting
	I0408 19:19:47.210381  190277 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem
	I0408 19:19:47.210427  190277 main.go:141] libmachine: Decoding PEM data...
	I0408 19:19:47.210448  190277 main.go:141] libmachine: Parsing certificate...
	I0408 19:19:47.210556  190277 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem
	I0408 19:19:47.210596  190277 main.go:141] libmachine: Decoding PEM data...
	I0408 19:19:47.210613  190277 main.go:141] libmachine: Parsing certificate...
	I0408 19:19:47.210654  190277 main.go:141] libmachine: Running pre-create checks...
	I0408 19:19:47.210667  190277 main.go:141] libmachine: (kindnet-880875) Calling .PreCreateCheck
	I0408 19:19:47.210993  190277 main.go:141] libmachine: (kindnet-880875) Calling .GetConfigRaw
	I0408 19:19:47.211428  190277 main.go:141] libmachine: Creating machine...
	I0408 19:19:47.211446  190277 main.go:141] libmachine: (kindnet-880875) Calling .Create
	I0408 19:19:47.211659  190277 main.go:141] libmachine: (kindnet-880875) creating KVM machine...
	I0408 19:19:47.211739  190277 main.go:141] libmachine: (kindnet-880875) creating network...
	I0408 19:19:47.213333  190277 main.go:141] libmachine: (kindnet-880875) DBG | found existing default KVM network
	I0408 19:19:47.214778  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.214594  190482 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:17:72:f5} reservation:<nil>}
	I0408 19:19:47.215541  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.215437  190482 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:ec:ba} reservation:<nil>}
	I0408 19:19:47.216283  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.216192  190482 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:ac:b1} reservation:<nil>}
	I0408 19:19:47.217497  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.217401  190482 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00035e9b0}
	I0408 19:19:47.217533  190277 main.go:141] libmachine: (kindnet-880875) DBG | created network xml: 
	I0408 19:19:47.217548  190277 main.go:141] libmachine: (kindnet-880875) DBG | <network>
	I0408 19:19:47.217560  190277 main.go:141] libmachine: (kindnet-880875) DBG |   <name>mk-kindnet-880875</name>
	I0408 19:19:47.217568  190277 main.go:141] libmachine: (kindnet-880875) DBG |   <dns enable='no'/>
	I0408 19:19:47.217577  190277 main.go:141] libmachine: (kindnet-880875) DBG |   
	I0408 19:19:47.217591  190277 main.go:141] libmachine: (kindnet-880875) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0408 19:19:47.217607  190277 main.go:141] libmachine: (kindnet-880875) DBG |     <dhcp>
	I0408 19:19:47.217617  190277 main.go:141] libmachine: (kindnet-880875) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0408 19:19:47.217627  190277 main.go:141] libmachine: (kindnet-880875) DBG |     </dhcp>
	I0408 19:19:47.217636  190277 main.go:141] libmachine: (kindnet-880875) DBG |   </ip>
	I0408 19:19:47.217645  190277 main.go:141] libmachine: (kindnet-880875) DBG |   
	I0408 19:19:47.217665  190277 main.go:141] libmachine: (kindnet-880875) DBG | </network>
	I0408 19:19:47.217685  190277 main.go:141] libmachine: (kindnet-880875) DBG | 
	I0408 19:19:47.223814  190277 main.go:141] libmachine: (kindnet-880875) DBG | trying to create private KVM network mk-kindnet-880875 192.168.72.0/24...
	I0408 19:19:47.307514  190277 main.go:141] libmachine: (kindnet-880875) DBG | private KVM network mk-kindnet-880875 192.168.72.0/24 created
	I0408 19:19:47.307552  190277 main.go:141] libmachine: (kindnet-880875) setting up store path in /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875 ...
	I0408 19:19:47.307571  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.307497  190482 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:19:47.307595  190277 main.go:141] libmachine: (kindnet-880875) building disk image from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 19:19:47.307718  190277 main.go:141] libmachine: (kindnet-880875) Downloading /home/jenkins/minikube-integration/20604-141129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 19:19:47.585622  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.585471  190482 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875/id_rsa...
	I0408 19:19:47.700046  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.699857  190482 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875/kindnet-880875.rawdisk...
	I0408 19:19:47.700085  190277 main.go:141] libmachine: (kindnet-880875) DBG | Writing magic tar header
	I0408 19:19:47.700104  190277 main.go:141] libmachine: (kindnet-880875) DBG | Writing SSH key tar header
	I0408 19:19:47.700138  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:47.700017  190482 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875 ...
	I0408 19:19:47.700160  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875
	I0408 19:19:47.700193  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines
	I0408 19:19:47.700208  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:19:47.700229  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875 (perms=drwx------)
	I0408 19:19:47.700293  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines (perms=drwxr-xr-x)
	I0408 19:19:47.700320  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129
	I0408 19:19:47.700331  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube (perms=drwxr-xr-x)
	I0408 19:19:47.700344  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins/minikube-integration/20604-141129 (perms=drwxrwxr-x)
	I0408 19:19:47.700363  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 19:19:47.700374  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 19:19:47.700390  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home/jenkins
	I0408 19:19:47.700400  190277 main.go:141] libmachine: (kindnet-880875) DBG | checking permissions on dir: /home
	I0408 19:19:47.700414  190277 main.go:141] libmachine: (kindnet-880875) DBG | skipping /home - not owner
	I0408 19:19:47.700429  190277 main.go:141] libmachine: (kindnet-880875) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 19:19:47.700438  190277 main.go:141] libmachine: (kindnet-880875) creating domain...
	I0408 19:19:47.701930  190277 main.go:141] libmachine: (kindnet-880875) define libvirt domain using xml: 
	I0408 19:19:47.701966  190277 main.go:141] libmachine: (kindnet-880875) <domain type='kvm'>
	I0408 19:19:47.701977  190277 main.go:141] libmachine: (kindnet-880875)   <name>kindnet-880875</name>
	I0408 19:19:47.701984  190277 main.go:141] libmachine: (kindnet-880875)   <memory unit='MiB'>3072</memory>
	I0408 19:19:47.702013  190277 main.go:141] libmachine: (kindnet-880875)   <vcpu>2</vcpu>
	I0408 19:19:47.702027  190277 main.go:141] libmachine: (kindnet-880875)   <features>
	I0408 19:19:47.702035  190277 main.go:141] libmachine: (kindnet-880875)     <acpi/>
	I0408 19:19:47.702042  190277 main.go:141] libmachine: (kindnet-880875)     <apic/>
	I0408 19:19:47.702087  190277 main.go:141] libmachine: (kindnet-880875)     <pae/>
	I0408 19:19:47.702110  190277 main.go:141] libmachine: (kindnet-880875)     
	I0408 19:19:47.702120  190277 main.go:141] libmachine: (kindnet-880875)   </features>
	I0408 19:19:47.702129  190277 main.go:141] libmachine: (kindnet-880875)   <cpu mode='host-passthrough'>
	I0408 19:19:47.702140  190277 main.go:141] libmachine: (kindnet-880875)   
	I0408 19:19:47.702148  190277 main.go:141] libmachine: (kindnet-880875)   </cpu>
	I0408 19:19:47.702173  190277 main.go:141] libmachine: (kindnet-880875)   <os>
	I0408 19:19:47.702184  190277 main.go:141] libmachine: (kindnet-880875)     <type>hvm</type>
	I0408 19:19:47.702191  190277 main.go:141] libmachine: (kindnet-880875)     <boot dev='cdrom'/>
	I0408 19:19:47.702217  190277 main.go:141] libmachine: (kindnet-880875)     <boot dev='hd'/>
	I0408 19:19:47.702229  190277 main.go:141] libmachine: (kindnet-880875)     <bootmenu enable='no'/>
	I0408 19:19:47.702235  190277 main.go:141] libmachine: (kindnet-880875)   </os>
	I0408 19:19:47.702247  190277 main.go:141] libmachine: (kindnet-880875)   <devices>
	I0408 19:19:47.702255  190277 main.go:141] libmachine: (kindnet-880875)     <disk type='file' device='cdrom'>
	I0408 19:19:47.702269  190277 main.go:141] libmachine: (kindnet-880875)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875/boot2docker.iso'/>
	I0408 19:19:47.702280  190277 main.go:141] libmachine: (kindnet-880875)       <target dev='hdc' bus='scsi'/>
	I0408 19:19:47.702293  190277 main.go:141] libmachine: (kindnet-880875)       <readonly/>
	I0408 19:19:47.702302  190277 main.go:141] libmachine: (kindnet-880875)     </disk>
	I0408 19:19:47.702315  190277 main.go:141] libmachine: (kindnet-880875)     <disk type='file' device='disk'>
	I0408 19:19:47.702327  190277 main.go:141] libmachine: (kindnet-880875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 19:19:47.702343  190277 main.go:141] libmachine: (kindnet-880875)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/kindnet-880875/kindnet-880875.rawdisk'/>
	I0408 19:19:47.702354  190277 main.go:141] libmachine: (kindnet-880875)       <target dev='hda' bus='virtio'/>
	I0408 19:19:47.702364  190277 main.go:141] libmachine: (kindnet-880875)     </disk>
	I0408 19:19:47.702371  190277 main.go:141] libmachine: (kindnet-880875)     <interface type='network'>
	I0408 19:19:47.702382  190277 main.go:141] libmachine: (kindnet-880875)       <source network='mk-kindnet-880875'/>
	I0408 19:19:47.702397  190277 main.go:141] libmachine: (kindnet-880875)       <model type='virtio'/>
	I0408 19:19:47.702408  190277 main.go:141] libmachine: (kindnet-880875)     </interface>
	I0408 19:19:47.702414  190277 main.go:141] libmachine: (kindnet-880875)     <interface type='network'>
	I0408 19:19:47.702425  190277 main.go:141] libmachine: (kindnet-880875)       <source network='default'/>
	I0408 19:19:47.702433  190277 main.go:141] libmachine: (kindnet-880875)       <model type='virtio'/>
	I0408 19:19:47.702444  190277 main.go:141] libmachine: (kindnet-880875)     </interface>
	I0408 19:19:47.702451  190277 main.go:141] libmachine: (kindnet-880875)     <serial type='pty'>
	I0408 19:19:47.702459  190277 main.go:141] libmachine: (kindnet-880875)       <target port='0'/>
	I0408 19:19:47.702465  190277 main.go:141] libmachine: (kindnet-880875)     </serial>
	I0408 19:19:47.702473  190277 main.go:141] libmachine: (kindnet-880875)     <console type='pty'>
	I0408 19:19:47.702484  190277 main.go:141] libmachine: (kindnet-880875)       <target type='serial' port='0'/>
	I0408 19:19:47.702492  190277 main.go:141] libmachine: (kindnet-880875)     </console>
	I0408 19:19:47.702502  190277 main.go:141] libmachine: (kindnet-880875)     <rng model='virtio'>
	I0408 19:19:47.702511  190277 main.go:141] libmachine: (kindnet-880875)       <backend model='random'>/dev/random</backend>
	I0408 19:19:47.702533  190277 main.go:141] libmachine: (kindnet-880875)     </rng>
	I0408 19:19:47.702544  190277 main.go:141] libmachine: (kindnet-880875)     
	I0408 19:19:47.702550  190277 main.go:141] libmachine: (kindnet-880875)     
	I0408 19:19:47.702557  190277 main.go:141] libmachine: (kindnet-880875)   </devices>
	I0408 19:19:47.702563  190277 main.go:141] libmachine: (kindnet-880875) </domain>
	I0408 19:19:47.702578  190277 main.go:141] libmachine: (kindnet-880875) 
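
With the domain XML rendered, the kvm2 driver defines the domain in libvirt and boots it (the "starting domain..." and "creating domain..." steps logged just below). A minimal sketch of that define-and-start step using the libvirt Go bindings (libvirt.org/go/libvirt); this is not the driver's actual code, and domainXML stands in for the XML printed above:

	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the cluster config
		if err != nil {
			return err
		}
		defer conn.Close()

		// Define the persistent domain from the rendered XML, then boot it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			return fmt.Errorf("starting domain: %w", err)
		}
		return nil
	}

	func main() {
		const domainXML = `<domain type='kvm'>...</domain>` // placeholder for the XML above
		if err := defineAndStart(domainXML); err != nil {
			fmt.Println(err)
		}
	}
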
	I0408 19:19:47.707758  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:23:19:dd in network default
	I0408 19:19:47.708588  190277 main.go:141] libmachine: (kindnet-880875) starting domain...
	I0408 19:19:47.708613  190277 main.go:141] libmachine: (kindnet-880875) ensuring networks are active...
	I0408 19:19:47.708625  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:03:41:95 in network mk-kindnet-880875
	I0408 19:19:47.709515  190277 main.go:141] libmachine: (kindnet-880875) Ensuring network default is active
	I0408 19:19:47.709844  190277 main.go:141] libmachine: (kindnet-880875) Ensuring network mk-kindnet-880875 is active
	I0408 19:19:47.710467  190277 main.go:141] libmachine: (kindnet-880875) getting domain XML...
	I0408 19:19:47.711406  190277 main.go:141] libmachine: (kindnet-880875) creating domain...
	I0408 19:19:49.168615  190277 main.go:141] libmachine: (kindnet-880875) waiting for IP...
	I0408 19:19:49.169567  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:03:41:95 in network mk-kindnet-880875
	I0408 19:19:49.170193  190277 main.go:141] libmachine: (kindnet-880875) DBG | unable to find current IP address of domain kindnet-880875 in network mk-kindnet-880875
	I0408 19:19:49.170236  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:49.170172  190482 retry.go:31] will retry after 208.234057ms: waiting for domain to come up
	I0408 19:19:49.379968  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:03:41:95 in network mk-kindnet-880875
	I0408 19:19:49.380594  190277 main.go:141] libmachine: (kindnet-880875) DBG | unable to find current IP address of domain kindnet-880875 in network mk-kindnet-880875
	I0408 19:19:49.380620  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:49.380554  190482 retry.go:31] will retry after 366.739781ms: waiting for domain to come up
	I0408 19:19:49.749212  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:03:41:95 in network mk-kindnet-880875
	I0408 19:19:49.750001  190277 main.go:141] libmachine: (kindnet-880875) DBG | unable to find current IP address of domain kindnet-880875 in network mk-kindnet-880875
	I0408 19:19:49.750030  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:49.749972  190482 retry.go:31] will retry after 346.54559ms: waiting for domain to come up
	I0408 19:19:50.098792  190277 main.go:141] libmachine: (kindnet-880875) DBG | domain kindnet-880875 has defined MAC address 52:54:00:03:41:95 in network mk-kindnet-880875
	I0408 19:19:50.099538  190277 main.go:141] libmachine: (kindnet-880875) DBG | unable to find current IP address of domain kindnet-880875 in network mk-kindnet-880875
	I0408 19:19:50.099565  190277 main.go:141] libmachine: (kindnet-880875) DBG | I0408 19:19:50.099508  190482 retry.go:31] will retry after 590.582581ms: waiting for domain to come up
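
While the new domain boots, the driver polls libvirt's DHCP leases for the domain's MAC address and, as the "will retry after ...ms" lines show, sleeps a growing, jittered interval between attempts. A compact sketch of such a poll loop; lookupIP stands in for the lease lookup, and the backoff numbers are illustrative rather than minikube's actual retry schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP stands in for querying the libvirt network's DHCP leases
	// for a MAC address; it always fails here to exercise the retry path.
	func lookupIP(mac string) (string, error) {
		return "", errNoLease
	}

	// waitForIP polls until the domain has an address or the deadline
	// passes, sleeping a jittered, growing delay between attempts.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 2*time.Second {
				delay += 150 * time.Millisecond
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		ip, err := waitForIP("52:54:00:03:41:95", 2*time.Second)
		fmt.Println(ip, err)
	}
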
	I0408 19:19:50.665720  190032 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:19:50.665816  190032 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:19:50.671795  190032 start.go:563] Will wait 60s for crictl version
	I0408 19:19:50.671857  190032 ssh_runner.go:195] Run: which crictl
	I0408 19:19:50.676184  190032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:19:50.721484  190032 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:19:50.721574  190032 ssh_runner.go:195] Run: crio --version
	I0408 19:19:50.762182  190032 ssh_runner.go:195] Run: crio --version
	I0408 19:19:50.808028  190032 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 19:19:50.809473  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) Calling .GetIP
	I0408 19:19:50.813045  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:50.813518  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e2:54", ip: ""} in network mk-kubernetes-upgrade-958400: {Iface:virbr2 ExpiryTime:2025-04-08 20:18:44 +0000 UTC Type:0 Mac:52:54:00:64:e2:54 Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-958400 Clientid:01:52:54:00:64:e2:54}
	I0408 19:19:50.813575  190032 main.go:141] libmachine: (kubernetes-upgrade-958400) DBG | domain kubernetes-upgrade-958400 has defined IP address 192.168.50.182 and MAC address 52:54:00:64:e2:54 in network mk-kubernetes-upgrade-958400
	I0408 19:19:50.813819  190032 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 19:19:50.824510  190032 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:19:50.824650  190032 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:19:50.824704  190032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:19:50.998506  190032 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:19:50.998543  190032 crio.go:433] Images already preloaded, skipping extraction
	I0408 19:19:50.998612  190032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:19:51.429324  190032 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:19:51.429358  190032 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:19:51.429369  190032 kubeadm.go:934] updating node { 192.168.50.182 8443 v1.32.2 crio true true} ...
	I0408 19:19:51.429512  190032 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-958400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:19:51.429601  190032 ssh_runner.go:195] Run: crio config
	I0408 19:19:51.554885  190032 cni.go:84] Creating CNI manager for ""
	I0408 19:19:51.554913  190032 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:19:51.554928  190032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:19:51.554970  190032 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.182 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-958400 NodeName:kubernetes-upgrade-958400 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:19:51.555172  190032 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-958400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:19:51.555305  190032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 19:19:51.579358  190032 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:19:51.579444  190032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:19:51.606411  190032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0408 19:19:51.633283  190032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:19:51.685235  190032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0408 19:19:51.722702  190032 ssh_runner.go:195] Run: grep 192.168.50.182	control-plane.minikube.internal$ /etc/hosts
	I0408 19:19:51.730085  190032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:19:51.980466  190032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:19:52.001483  190032 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400 for IP: 192.168.50.182
	I0408 19:19:52.001510  190032 certs.go:194] generating shared ca certs ...
	I0408 19:19:52.001534  190032 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:19:52.001738  190032 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:19:52.001781  190032 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:19:52.001793  190032 certs.go:256] generating profile certs ...
	I0408 19:19:52.001916  190032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/client.key
	I0408 19:19:52.001975  190032 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key.d506f96d
	I0408 19:19:52.002032  190032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key
	I0408 19:19:52.002294  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:19:52.002348  190032 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:19:52.002364  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:19:52.002387  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:19:52.002412  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:19:52.002443  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:19:52.002498  190032 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:19:52.003112  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:19:52.036452  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:19:52.113147  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:19:52.141815  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:19:52.172035  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 19:19:52.200411  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:19:52.231312  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:19:52.257107  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kubernetes-upgrade-958400/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 19:19:52.285399  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:19:52.311888  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:19:52.340302  190032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:19:52.371746  190032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:19:52.392814  190032 ssh_runner.go:195] Run: openssl version
	I0408 19:19:52.399131  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:19:52.410322  190032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:19:52.415224  190032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:19:52.415301  190032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:19:52.421279  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:19:52.431845  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:19:52.443493  190032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:19:52.448459  190032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:19:52.448521  190032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:19:52.454573  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:19:52.464752  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:19:52.479927  190032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:52.485667  190032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:52.485755  190032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:19:52.493466  190032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:19:52.506523  190032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:19:52.512579  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:19:52.520403  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:19:52.528062  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:19:52.535861  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:19:52.543515  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:19:52.551090  190032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 19:19:52.558675  190032 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-958400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-958400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:19:52.558780  190032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:19:52.558840  190032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:19:52.603149  190032 cri.go:89] found id: "9ad4b13821334d5cfd290f364a974c13f432c742f7e525761f72f081eefde4dd"
	I0408 19:19:52.603182  190032 cri.go:89] found id: "c6ec813b596bb2f189a591230569f06c98c3fcdbb0e6354c430c9e7a33ec4968"
	I0408 19:19:52.603188  190032 cri.go:89] found id: "1b564d804d291ac550e85a4cc3ef25760872f5c251eb267e2f8d6e71f5341dca"
	I0408 19:19:52.603209  190032 cri.go:89] found id: "b73612b2513710fc265568c8d3156eb71163ab9006ffa7a535ae1799cad34b43"
	I0408 19:19:52.603212  190032 cri.go:89] found id: "24571d755d0e87b15c69ac0dbb10d2176b89bf33c0586829089058fbacb43011"
	I0408 19:19:52.603217  190032 cri.go:89] found id: "7deaad2c8ffabde4eec01227434a79210afda50cae608b0a29f474778ca708cd"
	I0408 19:19:52.603222  190032 cri.go:89] found id: "0c00fd6e32b072ddd1c2163a0ced18be370a70b708bee0f23994f9ce463f8b70"
	I0408 19:19:52.603226  190032 cri.go:89] found id: "020bc53b009dde9ba25173ec036ed654e9aaac7442e1b61908d3205d52dd7669"
	I0408 19:19:52.603230  190032 cri.go:89] found id: ""
	I0408 19:19:52.603290  190032 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-958400 -n kubernetes-upgrade-958400
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-958400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-958400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-958400
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-958400: (1.196665393s)
--- FAIL: TestKubernetesUpgrade (441.04s)
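
Note: the upgrade post-mortem above shows minikube validating each existing control-plane certificate with `openssl x509 -noout -in <cert> -checkend 86400` before reusing it. As a rough, hypothetical sketch only (not minikube's actual helper code), the same 24-hour expiry check could be driven from Go roughly like this, using the same openssl flags and certificate paths that appear in the log:

// certcheck.go: illustrative sketch of the 24h certificate expiry check seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h returns true if openssl reports the certificate will still
// be valid 86400 seconds (24 hours) from now, mirroring the
// `openssl x509 -noout -in <cert> -checkend 86400` calls in the log.
func certValidFor24h(path string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// Non-zero exit: the certificate expires within 24h and should be renewed.
			return false, nil
		}
		// Anything else (openssl missing, unreadable file) is a real error.
		return false, err
	}
	return true, nil
}

func main() {
	// Paths taken from the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		ok, err := certValidFor24h(p)
		fmt.Println(p, ok, err)
	}
}

`-checkend` makes openssl exit non-zero when the certificate will expire within the given number of seconds, which is why a non-zero exit is treated as "needs renewal" rather than as a failure.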

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (275.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m35.11205923s)

                                                
                                                
-- stdout --
	* [old-k8s-version-257500] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-257500" primary control-plane node in "old-k8s-version-257500" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:22:54.486979  198268 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:22:54.487144  198268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:22:54.487157  198268 out.go:358] Setting ErrFile to fd 2...
	I0408 19:22:54.487163  198268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:22:54.487361  198268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:22:54.487996  198268 out.go:352] Setting JSON to false
	I0408 19:22:54.489149  198268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11120,"bootTime":1744129055,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:22:54.489278  198268 start.go:139] virtualization: kvm guest
	I0408 19:22:54.491403  198268 out.go:177] * [old-k8s-version-257500] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:22:54.492765  198268 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:22:54.492803  198268 notify.go:220] Checking for updates...
	I0408 19:22:54.495399  198268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:22:54.496860  198268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:22:54.498552  198268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:22:54.500016  198268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:22:54.501379  198268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:22:54.503015  198268 config.go:182] Loaded profile config "bridge-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:22:54.503127  198268 config.go:182] Loaded profile config "enable-default-cni-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:22:54.503229  198268 config.go:182] Loaded profile config "flannel-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:22:54.503356  198268 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:22:54.543540  198268 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 19:22:54.544948  198268 start.go:297] selected driver: kvm2
	I0408 19:22:54.544969  198268 start.go:901] validating driver "kvm2" against <nil>
	I0408 19:22:54.544982  198268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:22:54.546021  198268 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:22:54.546123  198268 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:22:54.564477  198268 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:22:54.564549  198268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 19:22:54.564800  198268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:22:54.564833  198268 cni.go:84] Creating CNI manager for ""
	I0408 19:22:54.564862  198268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:22:54.564870  198268 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 19:22:54.564932  198268 start.go:340] cluster config:
	{Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:22:54.565056  198268 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:22:54.567977  198268 out.go:177] * Starting "old-k8s-version-257500" primary control-plane node in "old-k8s-version-257500" cluster
	I0408 19:22:54.569593  198268 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:22:54.569675  198268 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 19:22:54.569691  198268 cache.go:56] Caching tarball of preloaded images
	I0408 19:22:54.569815  198268 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:22:54.569845  198268 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 19:22:54.569993  198268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/config.json ...
	I0408 19:22:54.570026  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/config.json: {Name:mkf300ac4e6cded789cf7d57c4a37d251a5d2db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:22:54.570236  198268 start.go:360] acquireMachinesLock for old-k8s-version-257500: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:22:57.328370  198268 start.go:364] duration metric: took 2.758095814s to acquireMachinesLock for "old-k8s-version-257500"
	I0408 19:22:57.328471  198268 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:22:57.328668  198268 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 19:22:57.330829  198268 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 19:22:57.331182  198268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:22:57.331270  198268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:22:57.353786  198268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0408 19:22:57.354448  198268 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:22:57.355138  198268 main.go:141] libmachine: Using API Version  1
	I0408 19:22:57.355157  198268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:22:57.355545  198268 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:22:57.355768  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:22:57.355947  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:22:57.356127  198268 start.go:159] libmachine.API.Create for "old-k8s-version-257500" (driver="kvm2")
	I0408 19:22:57.356156  198268 client.go:168] LocalClient.Create starting
	I0408 19:22:57.356196  198268 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem
	I0408 19:22:57.356254  198268 main.go:141] libmachine: Decoding PEM data...
	I0408 19:22:57.356278  198268 main.go:141] libmachine: Parsing certificate...
	I0408 19:22:57.356374  198268 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem
	I0408 19:22:57.356412  198268 main.go:141] libmachine: Decoding PEM data...
	I0408 19:22:57.356431  198268 main.go:141] libmachine: Parsing certificate...
	I0408 19:22:57.356457  198268 main.go:141] libmachine: Running pre-create checks...
	I0408 19:22:57.356472  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .PreCreateCheck
	I0408 19:22:57.356898  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetConfigRaw
	I0408 19:22:57.357407  198268 main.go:141] libmachine: Creating machine...
	I0408 19:22:57.357425  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .Create
	I0408 19:22:57.357627  198268 main.go:141] libmachine: (old-k8s-version-257500) creating KVM machine...
	I0408 19:22:57.357644  198268 main.go:141] libmachine: (old-k8s-version-257500) creating network...
	I0408 19:22:57.359455  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found existing default KVM network
	I0408 19:22:57.361142  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:22:57.360884  198418 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000113380}
	I0408 19:22:57.361175  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | created network xml: 
	I0408 19:22:57.361190  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | <network>
	I0408 19:22:57.361199  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   <name>mk-old-k8s-version-257500</name>
	I0408 19:22:57.361242  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   <dns enable='no'/>
	I0408 19:22:57.361260  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   
	I0408 19:22:57.361272  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 19:22:57.361284  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |     <dhcp>
	I0408 19:22:57.361292  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 19:22:57.361300  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |     </dhcp>
	I0408 19:22:57.361312  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   </ip>
	I0408 19:22:57.361331  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG |   
	I0408 19:22:57.361343  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | </network>
	I0408 19:22:57.361353  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | 
	I0408 19:22:57.368527  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | trying to create private KVM network mk-old-k8s-version-257500 192.168.39.0/24...
	I0408 19:22:57.472174  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | private KVM network mk-old-k8s-version-257500 192.168.39.0/24 created
	I0408 19:22:57.472328  198268 main.go:141] libmachine: (old-k8s-version-257500) setting up store path in /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500 ...
	I0408 19:22:57.472368  198268 main.go:141] libmachine: (old-k8s-version-257500) building disk image from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 19:22:57.472388  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:22:57.472333  198418 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:22:57.472595  198268 main.go:141] libmachine: (old-k8s-version-257500) Downloading /home/jenkins/minikube-integration/20604-141129/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 19:22:57.824507  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:22:57.824327  198418 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa...
	I0408 19:22:58.413626  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:22:58.413468  198418 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/old-k8s-version-257500.rawdisk...
	I0408 19:22:58.413659  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | Writing magic tar header
	I0408 19:22:58.413675  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | Writing SSH key tar header
	I0408 19:22:58.413687  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:22:58.413655  198418 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500 ...
	I0408 19:22:58.413858  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500
	I0408 19:22:58.413883  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube/machines
	I0408 19:22:58.413903  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500 (perms=drwx------)
	I0408 19:22:58.413918  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube/machines (perms=drwxr-xr-x)
	I0408 19:22:58.413937  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:22:58.414041  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20604-141129
	I0408 19:22:58.414078  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0408 19:22:58.414093  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins/minikube-integration/20604-141129/.minikube (perms=drwxr-xr-x)
	I0408 19:22:58.414115  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home/jenkins
	I0408 19:22:58.414134  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins/minikube-integration/20604-141129 (perms=drwxrwxr-x)
	I0408 19:22:58.414146  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | checking permissions on dir: /home
	I0408 19:22:58.414156  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 19:22:58.414173  198268 main.go:141] libmachine: (old-k8s-version-257500) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 19:22:58.414182  198268 main.go:141] libmachine: (old-k8s-version-257500) creating domain...
	I0408 19:22:58.414195  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | skipping /home - not owner
	I0408 19:22:58.415697  198268 main.go:141] libmachine: (old-k8s-version-257500) define libvirt domain using xml: 
	I0408 19:22:58.415725  198268 main.go:141] libmachine: (old-k8s-version-257500) <domain type='kvm'>
	I0408 19:22:58.415736  198268 main.go:141] libmachine: (old-k8s-version-257500)   <name>old-k8s-version-257500</name>
	I0408 19:22:58.415747  198268 main.go:141] libmachine: (old-k8s-version-257500)   <memory unit='MiB'>2200</memory>
	I0408 19:22:58.415766  198268 main.go:141] libmachine: (old-k8s-version-257500)   <vcpu>2</vcpu>
	I0408 19:22:58.415775  198268 main.go:141] libmachine: (old-k8s-version-257500)   <features>
	I0408 19:22:58.415814  198268 main.go:141] libmachine: (old-k8s-version-257500)     <acpi/>
	I0408 19:22:58.415839  198268 main.go:141] libmachine: (old-k8s-version-257500)     <apic/>
	I0408 19:22:58.415855  198268 main.go:141] libmachine: (old-k8s-version-257500)     <pae/>
	I0408 19:22:58.415866  198268 main.go:141] libmachine: (old-k8s-version-257500)     
	I0408 19:22:58.415879  198268 main.go:141] libmachine: (old-k8s-version-257500)   </features>
	I0408 19:22:58.415887  198268 main.go:141] libmachine: (old-k8s-version-257500)   <cpu mode='host-passthrough'>
	I0408 19:22:58.415897  198268 main.go:141] libmachine: (old-k8s-version-257500)   
	I0408 19:22:58.415904  198268 main.go:141] libmachine: (old-k8s-version-257500)   </cpu>
	I0408 19:22:58.415915  198268 main.go:141] libmachine: (old-k8s-version-257500)   <os>
	I0408 19:22:58.415925  198268 main.go:141] libmachine: (old-k8s-version-257500)     <type>hvm</type>
	I0408 19:22:58.415935  198268 main.go:141] libmachine: (old-k8s-version-257500)     <boot dev='cdrom'/>
	I0408 19:22:58.415949  198268 main.go:141] libmachine: (old-k8s-version-257500)     <boot dev='hd'/>
	I0408 19:22:58.415962  198268 main.go:141] libmachine: (old-k8s-version-257500)     <bootmenu enable='no'/>
	I0408 19:22:58.415971  198268 main.go:141] libmachine: (old-k8s-version-257500)   </os>
	I0408 19:22:58.415980  198268 main.go:141] libmachine: (old-k8s-version-257500)   <devices>
	I0408 19:22:58.415991  198268 main.go:141] libmachine: (old-k8s-version-257500)     <disk type='file' device='cdrom'>
	I0408 19:22:58.416007  198268 main.go:141] libmachine: (old-k8s-version-257500)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/boot2docker.iso'/>
	I0408 19:22:58.416023  198268 main.go:141] libmachine: (old-k8s-version-257500)       <target dev='hdc' bus='scsi'/>
	I0408 19:22:58.416035  198268 main.go:141] libmachine: (old-k8s-version-257500)       <readonly/>
	I0408 19:22:58.416045  198268 main.go:141] libmachine: (old-k8s-version-257500)     </disk>
	I0408 19:22:58.416118  198268 main.go:141] libmachine: (old-k8s-version-257500)     <disk type='file' device='disk'>
	I0408 19:22:58.416156  198268 main.go:141] libmachine: (old-k8s-version-257500)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 19:22:58.416181  198268 main.go:141] libmachine: (old-k8s-version-257500)       <source file='/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/old-k8s-version-257500.rawdisk'/>
	I0408 19:22:58.416248  198268 main.go:141] libmachine: (old-k8s-version-257500)       <target dev='hda' bus='virtio'/>
	I0408 19:22:58.416281  198268 main.go:141] libmachine: (old-k8s-version-257500)     </disk>
	I0408 19:22:58.416294  198268 main.go:141] libmachine: (old-k8s-version-257500)     <interface type='network'>
	I0408 19:22:58.416306  198268 main.go:141] libmachine: (old-k8s-version-257500)       <source network='mk-old-k8s-version-257500'/>
	I0408 19:22:58.416317  198268 main.go:141] libmachine: (old-k8s-version-257500)       <model type='virtio'/>
	I0408 19:22:58.416330  198268 main.go:141] libmachine: (old-k8s-version-257500)     </interface>
	I0408 19:22:58.416342  198268 main.go:141] libmachine: (old-k8s-version-257500)     <interface type='network'>
	I0408 19:22:58.416356  198268 main.go:141] libmachine: (old-k8s-version-257500)       <source network='default'/>
	I0408 19:22:58.416367  198268 main.go:141] libmachine: (old-k8s-version-257500)       <model type='virtio'/>
	I0408 19:22:58.416379  198268 main.go:141] libmachine: (old-k8s-version-257500)     </interface>
	I0408 19:22:58.416400  198268 main.go:141] libmachine: (old-k8s-version-257500)     <serial type='pty'>
	I0408 19:22:58.416428  198268 main.go:141] libmachine: (old-k8s-version-257500)       <target port='0'/>
	I0408 19:22:58.416452  198268 main.go:141] libmachine: (old-k8s-version-257500)     </serial>
	I0408 19:22:58.416466  198268 main.go:141] libmachine: (old-k8s-version-257500)     <console type='pty'>
	I0408 19:22:58.416483  198268 main.go:141] libmachine: (old-k8s-version-257500)       <target type='serial' port='0'/>
	I0408 19:22:58.416494  198268 main.go:141] libmachine: (old-k8s-version-257500)     </console>
	I0408 19:22:58.416504  198268 main.go:141] libmachine: (old-k8s-version-257500)     <rng model='virtio'>
	I0408 19:22:58.416517  198268 main.go:141] libmachine: (old-k8s-version-257500)       <backend model='random'>/dev/random</backend>
	I0408 19:22:58.416531  198268 main.go:141] libmachine: (old-k8s-version-257500)     </rng>
	I0408 19:22:58.416543  198268 main.go:141] libmachine: (old-k8s-version-257500)     
	I0408 19:22:58.416554  198268 main.go:141] libmachine: (old-k8s-version-257500)     
	I0408 19:22:58.416581  198268 main.go:141] libmachine: (old-k8s-version-257500)   </devices>
	I0408 19:22:58.416611  198268 main.go:141] libmachine: (old-k8s-version-257500) </domain>
	I0408 19:22:58.416644  198268 main.go:141] libmachine: (old-k8s-version-257500) 
	I0408 19:22:58.422053  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:23:10:39 in network default
	I0408 19:22:58.422876  198268 main.go:141] libmachine: (old-k8s-version-257500) starting domain...
	I0408 19:22:58.422901  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:22:58.422911  198268 main.go:141] libmachine: (old-k8s-version-257500) ensuring networks are active...
	I0408 19:22:58.423813  198268 main.go:141] libmachine: (old-k8s-version-257500) Ensuring network default is active
	I0408 19:22:58.424371  198268 main.go:141] libmachine: (old-k8s-version-257500) Ensuring network mk-old-k8s-version-257500 is active
	I0408 19:22:58.425152  198268 main.go:141] libmachine: (old-k8s-version-257500) getting domain XML...
	I0408 19:22:58.426180  198268 main.go:141] libmachine: (old-k8s-version-257500) creating domain...
	I0408 19:23:00.259676  198268 main.go:141] libmachine: (old-k8s-version-257500) waiting for IP...
	I0408 19:23:00.260816  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:00.261614  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:00.261637  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:00.261517  198418 retry.go:31] will retry after 252.097451ms: waiting for domain to come up
	I0408 19:23:00.515222  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:00.515896  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:00.515925  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:00.515870  198418 retry.go:31] will retry after 319.135243ms: waiting for domain to come up
	I0408 19:23:00.836637  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:00.837210  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:00.837233  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:00.837135  198418 retry.go:31] will retry after 426.211637ms: waiting for domain to come up
	I0408 19:23:01.269963  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:01.270004  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:01.270022  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:01.267802  198418 retry.go:31] will retry after 588.844985ms: waiting for domain to come up
	I0408 19:23:01.858988  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:01.859597  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:01.859624  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:01.859587  198418 retry.go:31] will retry after 507.01837ms: waiting for domain to come up
	I0408 19:23:02.368482  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:02.369185  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:02.369218  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:02.369091  198418 retry.go:31] will retry after 622.779742ms: waiting for domain to come up
	I0408 19:23:02.994197  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:02.994637  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:02.994670  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:02.994604  198418 retry.go:31] will retry after 822.166675ms: waiting for domain to come up
	I0408 19:23:03.818936  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:03.819345  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:03.819403  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:03.819327  198418 retry.go:31] will retry after 1.466030512s: waiting for domain to come up
	I0408 19:23:05.291061  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:05.291959  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:05.292014  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:05.291974  198418 retry.go:31] will retry after 1.196386795s: waiting for domain to come up
	I0408 19:23:06.490231  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:06.490771  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:06.490805  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:06.490742  198418 retry.go:31] will retry after 2.060880175s: waiting for domain to come up
	I0408 19:23:08.553694  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:08.554352  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:08.554383  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:08.554313  198418 retry.go:31] will retry after 2.309267043s: waiting for domain to come up
	I0408 19:23:10.867001  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:10.867743  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:10.867771  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:10.867640  198418 retry.go:31] will retry after 2.516284552s: waiting for domain to come up
	I0408 19:23:13.385497  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:13.386287  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:13.386319  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:13.386249  198418 retry.go:31] will retry after 3.697797392s: waiting for domain to come up
	I0408 19:23:17.087212  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:17.087715  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:23:17.087742  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:23:17.087665  198418 retry.go:31] will retry after 4.688868315s: waiting for domain to come up
	I0408 19:23:21.779009  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:21.779579  198268 main.go:141] libmachine: (old-k8s-version-257500) found domain IP: 192.168.39.192
	I0408 19:23:21.779602  198268 main.go:141] libmachine: (old-k8s-version-257500) reserving static IP address...
	I0408 19:23:21.779626  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has current primary IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:21.779977  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-257500", mac: "52:54:00:00:35:99", ip: "192.168.39.192"} in network mk-old-k8s-version-257500
	I0408 19:23:21.884100  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | Getting to WaitForSSH function...
	I0408 19:23:21.884132  198268 main.go:141] libmachine: (old-k8s-version-257500) reserved static IP address 192.168.39.192 for domain old-k8s-version-257500
	I0408 19:23:21.884144  198268 main.go:141] libmachine: (old-k8s-version-257500) waiting for SSH...
	I0408 19:23:21.887711  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:21.888240  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:35:99}
	I0408 19:23:21.888281  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:21.888590  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | Using SSH client type: external
	I0408 19:23:21.888618  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa (-rw-------)
	I0408 19:23:21.888651  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:23:21.888666  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | About to run SSH command:
	I0408 19:23:21.888711  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | exit 0
	I0408 19:23:22.022079  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | SSH cmd err, output: <nil>: 
	I0408 19:23:22.022403  198268 main.go:141] libmachine: (old-k8s-version-257500) KVM machine creation complete
	I0408 19:23:22.022785  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetConfigRaw
	I0408 19:23:22.023452  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:22.023696  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:22.023880  198268 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 19:23:22.023895  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetState
	I0408 19:23:22.025734  198268 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 19:23:22.025754  198268 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 19:23:22.025762  198268 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 19:23:22.025771  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.028302  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.028701  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.028732  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.029051  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.029306  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.029497  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.029650  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.029810  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:22.030096  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:22.030111  198268 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 19:23:22.145329  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:23:22.145355  198268 main.go:141] libmachine: Detecting the provisioner...
	I0408 19:23:22.145364  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.148681  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.149071  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.149106  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.149331  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.149556  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.149746  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.149973  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.150198  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:22.150486  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:22.150503  198268 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 19:23:22.271539  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 19:23:22.271592  198268 main.go:141] libmachine: found compatible host: buildroot
	I0408 19:23:22.271600  198268 main.go:141] libmachine: Provisioning with buildroot...
	I0408 19:23:22.271611  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:23:22.271897  198268 buildroot.go:166] provisioning hostname "old-k8s-version-257500"
	I0408 19:23:22.271927  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:23:22.272140  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.275245  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.275598  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.275626  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.275819  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.276029  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.276218  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.276381  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.276568  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:22.276774  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:22.276787  198268 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-257500 && echo "old-k8s-version-257500" | sudo tee /etc/hostname
	I0408 19:23:22.414223  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-257500
	
	I0408 19:23:22.414264  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.417368  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.417856  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.417891  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.418187  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.418447  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.418755  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.419072  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.419382  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:22.419682  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:22.419714  198268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-257500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-257500/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-257500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:23:22.543436  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:23:22.543480  198268 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:23:22.543508  198268 buildroot.go:174] setting up certificates
	I0408 19:23:22.543523  198268 provision.go:84] configureAuth start
	I0408 19:23:22.543541  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:23:22.543863  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:23:22.547129  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.547493  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.547528  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.547715  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.550477  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.550845  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.550881  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.551105  198268 provision.go:143] copyHostCerts
	I0408 19:23:22.551174  198268 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:23:22.551202  198268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:23:22.551291  198268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:23:22.551395  198268 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:23:22.551404  198268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:23:22.551428  198268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:23:22.551482  198268 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:23:22.551489  198268 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:23:22.551508  198268 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:23:22.551564  198268 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-257500 san=[127.0.0.1 192.168.39.192 localhost minikube old-k8s-version-257500]
	I0408 19:23:22.615251  198268 provision.go:177] copyRemoteCerts
	I0408 19:23:22.615312  198268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:23:22.615344  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.618556  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.618878  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.618904  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.619149  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.619385  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.619576  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.619902  198268 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:23:22.708550  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:23:22.737542  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:23:22.765989  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 19:23:22.792663  198268 provision.go:87] duration metric: took 249.121134ms to configureAuth
	I0408 19:23:22.792695  198268 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:23:22.792866  198268 config.go:182] Loaded profile config "old-k8s-version-257500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 19:23:22.792959  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:22.795599  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.796034  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:22.796067  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:22.796278  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:22.796494  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.796632  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:22.796790  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:22.796948  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:22.797193  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:22.797217  198268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:23:23.039418  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:23:23.039446  198268 main.go:141] libmachine: Checking connection to Docker...
	I0408 19:23:23.039454  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetURL
	I0408 19:23:23.040904  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | using libvirt version 6000000
	I0408 19:23:23.043249  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.043573  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.043604  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.043754  198268 main.go:141] libmachine: Docker is up and running!
	I0408 19:23:23.043768  198268 main.go:141] libmachine: Reticulating splines...
	I0408 19:23:23.043778  198268 client.go:171] duration metric: took 25.687612742s to LocalClient.Create
	I0408 19:23:23.043815  198268 start.go:167] duration metric: took 25.687688223s to libmachine.API.Create "old-k8s-version-257500"
	I0408 19:23:23.043828  198268 start.go:293] postStartSetup for "old-k8s-version-257500" (driver="kvm2")
	I0408 19:23:23.043839  198268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:23:23.043861  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:23.044113  198268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:23:23.044154  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:23.046265  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.046574  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.046602  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.046792  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:23.047025  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:23.047226  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:23.047402  198268 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:23:23.136573  198268 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:23:23.141186  198268 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:23:23.141213  198268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:23:23.141301  198268 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:23:23.141418  198268 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:23:23.141520  198268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:23:23.152534  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:23:23.179311  198268 start.go:296] duration metric: took 135.466005ms for postStartSetup
	I0408 19:23:23.179373  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetConfigRaw
	I0408 19:23:23.180021  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:23:23.183311  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.183685  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.183715  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.183928  198268 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/config.json ...
	I0408 19:23:23.184146  198268 start.go:128] duration metric: took 25.855460195s to createHost
	I0408 19:23:23.184173  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:23.186838  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.187320  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.187354  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.187531  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:23.187748  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:23.187957  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:23.188141  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:23.188322  198268 main.go:141] libmachine: Using SSH client type: native
	I0408 19:23:23.188536  198268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:23:23.188547  198268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:23:23.302916  198268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744140203.270140486
	
	I0408 19:23:23.302943  198268 fix.go:216] guest clock: 1744140203.270140486
	I0408 19:23:23.302951  198268 fix.go:229] Guest: 2025-04-08 19:23:23.270140486 +0000 UTC Remote: 2025-04-08 19:23:23.184159141 +0000 UTC m=+28.738075788 (delta=85.981345ms)
	I0408 19:23:23.302973  198268 fix.go:200] guest clock delta is within tolerance: 85.981345ms
	I0408 19:23:23.302978  198268 start.go:83] releasing machines lock for "old-k8s-version-257500", held for 25.974551999s
	I0408 19:23:23.303002  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:23.303338  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:23:23.306523  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.306927  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.306960  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.307159  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:23.307803  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:23.307992  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:23:23.308094  198268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:23:23.308151  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:23.308302  198268 ssh_runner.go:195] Run: cat /version.json
	I0408 19:23:23.308331  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:23:23.311045  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.311273  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.311448  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.311480  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.311716  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:23.311827  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:23.311857  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:23.311885  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:23.312031  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:23.312155  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:23:23.312232  198268 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:23:23.312331  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:23:23.312466  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:23:23.312575  198268 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:23:23.427008  198268 ssh_runner.go:195] Run: systemctl --version
	I0408 19:23:23.435398  198268 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:23:23.603163  198268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:23:23.610194  198268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:23:23.610287  198268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:23:23.630960  198268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:23:23.630992  198268 start.go:495] detecting cgroup driver to use...
	I0408 19:23:23.631090  198268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:23:23.651120  198268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:23:23.673114  198268 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:23:23.673206  198268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:23:23.690310  198268 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:23:23.705740  198268 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:23:23.838354  198268 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:23:24.019183  198268 docker.go:233] disabling docker service ...
	I0408 19:23:24.019280  198268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:23:24.037475  198268 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:23:24.052806  198268 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:23:24.189727  198268 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:23:24.321405  198268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:23:24.338042  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:23:24.359088  198268 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 19:23:24.359172  198268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:23:24.370522  198268 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:23:24.370589  198268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:23:24.381615  198268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:23:24.394160  198268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:23:24.408189  198268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:23:24.421533  198268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:23:24.434496  198268 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:23:24.434574  198268 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:23:24.448630  198268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:23:24.460694  198268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:23:24.607345  198268 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:23:24.715626  198268 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:23:24.715724  198268 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:23:24.720702  198268 start.go:563] Will wait 60s for crictl version
	I0408 19:23:24.720777  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:24.724856  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:23:24.774777  198268 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:23:24.774863  198268 ssh_runner.go:195] Run: crio --version
	I0408 19:23:24.810759  198268 ssh_runner.go:195] Run: crio --version
	I0408 19:23:24.846180  198268 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 19:23:24.847563  198268 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:23:24.851396  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:24.851771  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:23:24.851812  198268 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:23:24.852130  198268 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 19:23:24.856407  198268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:23:24.869143  198268 kubeadm.go:883] updating cluster {Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:23:24.869282  198268 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:23:24.869419  198268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:23:24.905724  198268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:23:24.905809  198268 ssh_runner.go:195] Run: which lz4
	I0408 19:23:24.911444  198268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:23:24.916094  198268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:23:24.916132  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 19:23:26.526560  198268 crio.go:462] duration metric: took 1.615148878s to copy over tarball
	I0408 19:23:26.526639  198268 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:23:29.552291  198268 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025617318s)
	I0408 19:23:29.552328  198268 crio.go:469] duration metric: took 3.025732532s to extract the tarball
	I0408 19:23:29.552350  198268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:23:29.597563  198268 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:23:29.660613  198268 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:23:29.660658  198268 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 19:23:29.660758  198268 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:23:29.660835  198268 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:29.660868  198268 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0408 19:23:29.660926  198268 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 19:23:29.661135  198268 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:29.660845  198268 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:29.661198  198268 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:29.661364  198268 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:29.663247  198268 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:23:29.663342  198268 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 19:23:29.663388  198268 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:29.663446  198268 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 19:23:29.663519  198268 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:29.663248  198268 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:29.663622  198268 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:29.663638  198268 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:29.820807  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:29.823138  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 19:23:29.829431  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:29.831173  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:29.837494  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:29.842532  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:29.868870  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 19:23:29.993745  198268 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 19:23:29.993816  198268 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:29.993890  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.037859  198268 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 19:23:30.037927  198268 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 19:23:30.037981  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.078324  198268 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 19:23:30.078380  198268 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:30.078441  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.085441  198268 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 19:23:30.085509  198268 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:30.085537  198268 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 19:23:30.085607  198268 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:30.085563  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.085664  198268 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 19:23:30.085715  198268 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:30.085667  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.085756  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.101573  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:30.101657  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:23:30.101708  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:30.101666  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:30.101772  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:30.101775  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:30.101952  198268 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 19:23:30.102000  198268 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 19:23:30.102040  198268 ssh_runner.go:195] Run: which crictl
	I0408 19:23:30.243533  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:23:30.243700  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:30.244011  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:23:30.256788  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:30.256840  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:30.256842  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:30.256909  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:30.396800  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:23:30.396856  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:23:30.396890  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:23:30.400137  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:23:30.400178  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:23:30.418748  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:23:30.418862  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:23:30.572667  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 19:23:30.572740  198268 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:23:30.572760  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 19:23:30.572767  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 19:23:30.572852  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 19:23:30.578788  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 19:23:30.582283  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 19:23:30.616477  198268 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 19:23:31.200126  198268 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:23:31.344888  198268 cache_images.go:92] duration metric: took 1.684207564s to LoadCachedImages
	W0408 19:23:31.345001  198268 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0408 19:23:31.345062  198268 kubeadm.go:934] updating node { 192.168.39.192 8443 v1.20.0 crio true true} ...
	I0408 19:23:31.345194  198268 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-257500 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:23:31.345273  198268 ssh_runner.go:195] Run: crio config
	I0408 19:23:31.392266  198268 cni.go:84] Creating CNI manager for ""
	I0408 19:23:31.392291  198268 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:23:31.392303  198268 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:23:31.392323  198268 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.192 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-257500 NodeName:old-k8s-version-257500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 19:23:31.392444  198268 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-257500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:23:31.392512  198268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 19:23:31.403913  198268 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:23:31.404003  198268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:23:31.416163  198268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 19:23:31.435133  198268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:23:31.454898  198268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 19:23:31.474208  198268 ssh_runner.go:195] Run: grep 192.168.39.192	control-plane.minikube.internal$ /etc/hosts
	I0408 19:23:31.478567  198268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:23:31.492473  198268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:23:31.630774  198268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:23:31.649621  198268 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500 for IP: 192.168.39.192
	I0408 19:23:31.649653  198268 certs.go:194] generating shared ca certs ...
	I0408 19:23:31.649677  198268 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.649919  198268 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:23:31.650002  198268 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:23:31.650020  198268 certs.go:256] generating profile certs ...
	I0408 19:23:31.650106  198268 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.key
	I0408 19:23:31.650143  198268 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.crt with IP's: []
	I0408 19:23:31.806158  198268 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.crt ...
	I0408 19:23:31.806197  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.crt: {Name:mk45d755b04d80cedc15880e1b9f2e2228a8675a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.806442  198268 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.key ...
	I0408 19:23:31.806466  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.key: {Name:mkd5bbc1876de7ec95b0872c873e7cfd11033ac2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.806563  198268 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key.31857a68
	I0408 19:23:31.806579  198268 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt.31857a68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.192]
	I0408 19:23:31.855748  198268 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt.31857a68 ...
	I0408 19:23:31.855785  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt.31857a68: {Name:mk18ddd6b97f730b997a9f6f77ba1476fa33d0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.915942  198268 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key.31857a68 ...
	I0408 19:23:31.915997  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key.31857a68: {Name:mk07bfe97f9ed0efbc4a928d924c9911ba0d2072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.916169  198268 certs.go:381] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt.31857a68 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt
	I0408 19:23:31.916290  198268 certs.go:385] copying /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key.31857a68 -> /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key
	I0408 19:23:31.916377  198268 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key
	I0408 19:23:31.916403  198268 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.crt with IP's: []
	I0408 19:23:31.997341  198268 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.crt ...
	I0408 19:23:31.997376  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.crt: {Name:mk45e17f8e69f045f8390f6daf440be261ae351a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.997583  198268 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key ...
	I0408 19:23:31.997603  198268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key: {Name:mk8067dd56dc5132b5490374f5c956c6ac16b3be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:23:31.997828  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:23:31.997920  198268 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:23:31.997973  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:23:31.998007  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:23:31.998041  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:23:31.998073  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:23:31.998136  198268 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:23:31.998705  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:23:32.026121  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:23:32.053644  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:23:32.081631  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:23:32.110132  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 19:23:32.139641  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:23:32.168328  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:23:32.195832  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:23:32.235708  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:23:32.265767  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:23:32.298869  198268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:23:32.330536  198268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:23:32.352749  198268 ssh_runner.go:195] Run: openssl version
	I0408 19:23:32.359702  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:23:32.373414  198268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:23:32.378932  198268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:23:32.379022  198268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:23:32.386112  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:23:32.399099  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:23:32.413296  198268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:23:32.419161  198268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:23:32.419344  198268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:23:32.426596  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:23:32.440855  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:23:32.454836  198268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:23:32.460898  198268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:23:32.460999  198268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:23:32.467879  198268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:23:32.481823  198268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:23:32.495716  198268 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 19:23:32.495796  198268 kubeadm.go:392] StartCluster: {Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:23:32.495899  198268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:23:32.495974  198268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:23:32.553680  198268 cri.go:89] found id: ""
	I0408 19:23:32.553769  198268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:23:32.569007  198268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:23:32.595043  198268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:23:32.612386  198268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:23:32.612416  198268 kubeadm.go:157] found existing configuration files:
	
	I0408 19:23:32.612496  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:23:32.626178  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:23:32.626263  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:23:32.639861  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:23:32.653888  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:23:32.653980  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:23:32.668055  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:23:32.681469  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:23:32.681565  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:23:32.692056  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:23:32.703438  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:23:32.703521  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:23:32.714732  198268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:23:32.855799  198268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:23:32.855922  198268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:23:33.012405  198268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:23:33.012614  198268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:23:33.012795  198268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:23:33.203729  198268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:23:33.243479  198268 out.go:235]   - Generating certificates and keys ...
	I0408 19:23:33.243635  198268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:23:33.243734  198268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:23:33.298939  198268 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 19:23:33.516332  198268 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 19:23:33.713926  198268 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 19:23:33.831222  198268 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 19:23:34.075445  198268 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 19:23:34.075747  198268 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	I0408 19:23:34.189108  198268 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 19:23:34.189518  198268 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	I0408 19:23:34.288792  198268 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 19:23:34.521228  198268 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 19:23:34.650768  198268 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 19:23:34.651037  198268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:23:35.027371  198268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:23:35.393159  198268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:23:35.532438  198268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:23:35.867864  198268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:23:35.886846  198268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:23:35.888411  198268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:23:35.888476  198268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:23:36.075511  198268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:23:36.077026  198268 out.go:235]   - Booting up control plane ...
	I0408 19:23:36.077181  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:23:36.088640  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:23:36.088785  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:23:36.091643  198268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:23:36.096978  198268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:24:16.087492  198268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:24:16.088782  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:24:16.089059  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:24:21.089265  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:24:21.089558  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:24:31.088668  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:24:31.088875  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:24:51.088156  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:24:51.088439  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:25:31.088832  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:25:31.089066  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:25:31.089094  198268 kubeadm.go:310] 
	I0408 19:25:31.089146  198268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:25:31.089240  198268 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:25:31.089257  198268 kubeadm.go:310] 
	I0408 19:25:31.089306  198268 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:25:31.089368  198268 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:25:31.089474  198268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:25:31.089482  198268 kubeadm.go:310] 
	I0408 19:25:31.089608  198268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:25:31.089657  198268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:25:31.089706  198268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:25:31.089721  198268 kubeadm.go:310] 
	I0408 19:25:31.089879  198268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:25:31.089967  198268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:25:31.089975  198268 kubeadm.go:310] 
	I0408 19:25:31.090061  198268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:25:31.090199  198268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:25:31.090325  198268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:25:31.090421  198268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:25:31.090438  198268 kubeadm.go:310] 
	I0408 19:25:31.090783  198268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:25:31.090911  198268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:25:31.090984  198268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0408 19:25:31.091137  198268 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-257500] and IPs [192.168.39.192 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 19:25:31.091190  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:25:32.683968  198268 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.592751928s)
	I0408 19:25:32.684044  198268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:25:32.698260  198268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:25:32.708236  198268 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:25:32.708261  198268 kubeadm.go:157] found existing configuration files:
	
	I0408 19:25:32.708320  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:25:32.719269  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:25:32.719341  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:25:32.730661  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:25:32.740895  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:25:32.740970  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:25:32.751222  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:25:32.761613  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:25:32.761681  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:25:32.772347  198268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:25:32.781868  198268 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:25:32.781935  198268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:25:32.792627  198268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:25:32.865592  198268 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:25:32.865676  198268 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:25:33.010348  198268 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:25:33.010522  198268 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:25:33.010637  198268 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:25:33.194579  198268 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:25:33.197058  198268 out.go:235]   - Generating certificates and keys ...
	I0408 19:25:33.197214  198268 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:25:33.197330  198268 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:25:33.197474  198268 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:25:33.197573  198268 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:25:33.197670  198268 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:25:33.197755  198268 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:25:33.197850  198268 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:25:33.197952  198268 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:25:33.198053  198268 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:25:33.198167  198268 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:25:33.198240  198268 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:25:33.198337  198268 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:25:33.434760  198268 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:25:33.584982  198268 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:25:33.686825  198268 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:25:33.758031  198268 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:25:33.773017  198268 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:25:33.774499  198268 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:25:33.774564  198268 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:25:33.923574  198268 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:25:33.925790  198268 out.go:235]   - Booting up control plane ...
	I0408 19:25:33.925939  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:25:33.931094  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:25:33.932182  198268 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:25:33.932863  198268 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:25:33.935212  198268 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:26:13.937251  198268 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:26:13.937346  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:26:13.937600  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:26:18.937813  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:26:18.938102  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:26:28.938493  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:26:28.938766  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:26:48.937954  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:26:48.938281  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:27:28.937730  198268 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:27:28.938020  198268 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:27:28.938042  198268 kubeadm.go:310] 
	I0408 19:27:28.938117  198268 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:27:28.938221  198268 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:27:28.938233  198268 kubeadm.go:310] 
	I0408 19:27:28.938284  198268 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:27:28.938332  198268 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:27:28.938455  198268 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:27:28.938464  198268 kubeadm.go:310] 
	I0408 19:27:28.938590  198268 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:27:28.938636  198268 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:27:28.938677  198268 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:27:28.938686  198268 kubeadm.go:310] 
	I0408 19:27:28.938817  198268 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:27:28.938929  198268 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:27:28.938942  198268 kubeadm.go:310] 
	I0408 19:27:28.939146  198268 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:27:28.939295  198268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:27:28.939403  198268 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:27:28.939512  198268 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:27:28.939524  198268 kubeadm.go:310] 
	I0408 19:27:28.939674  198268 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:27:28.939801  198268 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:27:28.939903  198268 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:27:28.940019  198268 kubeadm.go:394] duration metric: took 3m56.444226801s to StartCluster
	I0408 19:27:28.940072  198268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:27:28.940141  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:27:28.975973  198268 cri.go:89] found id: ""
	I0408 19:27:28.976008  198268 logs.go:282] 0 containers: []
	W0408 19:27:28.976018  198268 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:27:28.976024  198268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:27:28.976091  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:27:29.012032  198268 cri.go:89] found id: ""
	I0408 19:27:29.012061  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.012072  198268 logs.go:284] No container was found matching "etcd"
	I0408 19:27:29.012082  198268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:27:29.012164  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:27:29.052580  198268 cri.go:89] found id: ""
	I0408 19:27:29.052612  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.052622  198268 logs.go:284] No container was found matching "coredns"
	I0408 19:27:29.052631  198268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:27:29.052711  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:27:29.091349  198268 cri.go:89] found id: ""
	I0408 19:27:29.091383  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.091395  198268 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:27:29.091404  198268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:27:29.091467  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:27:29.128088  198268 cri.go:89] found id: ""
	I0408 19:27:29.128120  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.128128  198268 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:27:29.128134  198268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:27:29.128202  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:27:29.163947  198268 cri.go:89] found id: ""
	I0408 19:27:29.163984  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.163995  198268 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:27:29.164004  198268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:27:29.164078  198268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:27:29.197717  198268 cri.go:89] found id: ""
	I0408 19:27:29.197754  198268 logs.go:282] 0 containers: []
	W0408 19:27:29.197766  198268 logs.go:284] No container was found matching "kindnet"
	I0408 19:27:29.197779  198268 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:27:29.197795  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:27:29.315828  198268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:27:29.315856  198268 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:27:29.315874  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:27:29.425555  198268 logs.go:123] Gathering logs for container status ...
	I0408 19:27:29.425608  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:27:29.466712  198268 logs.go:123] Gathering logs for kubelet ...
	I0408 19:27:29.466746  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:27:29.518983  198268 logs.go:123] Gathering logs for dmesg ...
	I0408 19:27:29.519028  198268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 19:27:29.536355  198268 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:27:29.536480  198268 out.go:270] * 
	* 
	W0408 19:27:29.536561  198268 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:27:29.536584  198268 out.go:270] * 
	W0408 19:27:29.537786  198268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:27:29.541735  198268 out.go:201] 
	W0408 19:27:29.543405  198268 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:27:29.543473  198268 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:27:29.543512  198268 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:27:29.545289  198268 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 6 (258.616577ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 19:27:29.861148  204892 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-257500" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-257500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.44s)
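The kubeadm output above gives up because the kubelet on the node never answers its health check on 127.0.0.1:10248, and minikube's own suggestion in the same output is to read the kubelet journal and, if the cgroup driver is the problem, to retry with kubelet.cgroup-driver=systemd. A minimal triage pass over the VM, built only from the commands already quoted in the output (the profile name, CRI-O socket path, and CONTAINERID placeholder come from the log; treat this as a sketch, not part of the test):

	# open a shell in the profile's VM
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh

	# inside the VM: is the kubelet running, and why did it stop?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# control-plane containers that CRI-O actually started, then the logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# retry the start with the cgroup driver the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-257500 --extra-config=kubelet.cgroup-driver=systemd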

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-257500 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-257500 create -f testdata/busybox.yaml: exit status 1 (51.786542ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-257500" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-257500 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 6 (247.741622ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 19:27:30.162432  204931 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-257500" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-257500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 6 (247.034684ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 19:27:30.410466  204977 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-257500" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-257500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
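The create and both status probes above fail for the same underlying reason: the profile was never registered in /home/jenkins/minikube-integration/20604-141129/kubeconfig, presumably because the first start aborted before it updated the file, so kubectl has no "old-k8s-version-257500" context and minikube status warns that the current context points at a stale VM. A quick way to confirm that state and apply the fix the warning names (paths and profile name taken from the log; illustrative only):

	# list the contexts kubectl actually has; the profile should be absent
	KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig kubectl config get-contexts

	# regenerate the kubeconfig entry for the profile, as the warning suggests
	out/minikube-linux-amd64 -p old-k8s-version-257500 update-context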

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 19:27:31.043392  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:36.165586  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m24.92657014s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-257500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-257500 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-257500 describe deploy/metrics-server -n kube-system: exit status 1 (50.024124ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-257500" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-257500 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 6 (248.261067ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 19:28:55.636391  205795 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-257500" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-257500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (85.23s)
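The addon failure is downstream of the same dead control plane: enabling metrics-server makes minikube run the bundled kubectl inside the VM against localhost:8443, and the apply of the addon manifests is refused because nothing is listening there. To see that directly, one can re-run the pieces quoted in the error by hand (socket path, kubeconfig path, and manifest path are copied from the output above; this is a sketch, not something the job executed):

	# is there any apiserver container at all?
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube-apiserver"

	# the same apply the addon callback attempted, run manually inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-server-deployment.yaml"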

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (507.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0408 19:28:57.392641  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:58.034614  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:59.316480  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:01.878595  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:03.288644  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:05.671187  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:07.000349  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:17.241708  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:17.860138  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:37.723817  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:29:44.250265  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:09.773980  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:18.685728  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:19.901534  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:27.593077  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:39.644424  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:30:54.136108  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:31:06.172456  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:31:07.349942  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:31:21.838792  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:31:33.998841  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:31:40.608134  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:32:01.701813  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.292759652s)

                                                
                                                
-- stdout --
	* [old-k8s-version-257500] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-257500" primary control-plane node in "old-k8s-version-257500" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-257500" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
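In the console output just above, "Generating certificates and keys" and "Booting up control plane" each appear twice, which suggests kubeadm init was attempted a second time after the first wait timed out, before the run finally exited with status 109. When chasing this interactively rather than in CI, a more verbose re-run plus a saved log bundle would usually be the next step (the verbosity flag is a standard klog-style option, not one this job passed; treat this as a sketch):

	# repeat the failing start with higher minikube/klog verbosity
	out/minikube-linux-amd64 start -p old-k8s-version-257500 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --alsologtostderr --v=7

	# collect the full log bundle referenced by the issue-report box
	out/minikube-linux-amd64 -p old-k8s-version-257500 logs --file=logs.txt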
** stderr ** 
	I0408 19:28:57.302806  205913 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:28:57.302926  205913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:28:57.302940  205913 out.go:358] Setting ErrFile to fd 2...
	I0408 19:28:57.302947  205913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:28:57.303160  205913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:28:57.303779  205913 out.go:352] Setting JSON to false
	I0408 19:28:57.304883  205913 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11482,"bootTime":1744129055,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:28:57.305045  205913 start.go:139] virtualization: kvm guest
	I0408 19:28:57.307334  205913 out.go:177] * [old-k8s-version-257500] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:28:57.308692  205913 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:28:57.308690  205913 notify.go:220] Checking for updates...
	I0408 19:28:57.309946  205913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:28:57.311271  205913 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:28:57.312533  205913 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:28:57.313766  205913 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:28:57.315244  205913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:28:57.317215  205913 config.go:182] Loaded profile config "old-k8s-version-257500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 19:28:57.317701  205913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:28:57.317764  205913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:28:57.334745  205913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0408 19:28:57.335252  205913 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:28:57.335763  205913 main.go:141] libmachine: Using API Version  1
	I0408 19:28:57.335790  205913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:28:57.336267  205913 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:28:57.336514  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:28:57.338612  205913 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0408 19:28:57.339953  205913 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:28:57.340327  205913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:28:57.340383  205913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:28:57.356761  205913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0408 19:28:57.357240  205913 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:28:57.357655  205913 main.go:141] libmachine: Using API Version  1
	I0408 19:28:57.357695  205913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:28:57.358105  205913 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:28:57.358306  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:28:57.400868  205913 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 19:28:57.402389  205913 start.go:297] selected driver: kvm2
	I0408 19:28:57.402411  205913 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:28:57.402538  205913 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:28:57.403401  205913 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:28:57.403501  205913 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:28:57.420545  205913 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:28:57.421040  205913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 19:28:57.421093  205913 cni.go:84] Creating CNI manager for ""
	I0408 19:28:57.421136  205913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:28:57.421196  205913 start.go:340] cluster config:
	{Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:28:57.421341  205913 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:28:57.424506  205913 out.go:177] * Starting "old-k8s-version-257500" primary control-plane node in "old-k8s-version-257500" cluster
	I0408 19:28:57.425973  205913 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:28:57.426038  205913 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 19:28:57.426052  205913 cache.go:56] Caching tarball of preloaded images
	I0408 19:28:57.426155  205913 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:28:57.426170  205913 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 19:28:57.426319  205913 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/config.json ...
	I0408 19:28:57.426592  205913 start.go:360] acquireMachinesLock for old-k8s-version-257500: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:28:57.426653  205913 start.go:364] duration metric: took 34.31µs to acquireMachinesLock for "old-k8s-version-257500"
	I0408 19:28:57.426673  205913 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:28:57.426683  205913 fix.go:54] fixHost starting: 
	I0408 19:28:57.427103  205913 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:28:57.427136  205913 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:28:57.443734  205913 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
	I0408 19:28:57.444319  205913 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:28:57.444843  205913 main.go:141] libmachine: Using API Version  1
	I0408 19:28:57.444868  205913 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:28:57.445292  205913 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:28:57.445531  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:28:57.445724  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetState
	I0408 19:28:57.447879  205913 fix.go:112] recreateIfNeeded on old-k8s-version-257500: state=Stopped err=<nil>
	I0408 19:28:57.447915  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	W0408 19:28:57.448103  205913 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:28:57.450231  205913 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-257500" ...
	I0408 19:28:57.451872  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .Start
	I0408 19:28:57.452196  205913 main.go:141] libmachine: (old-k8s-version-257500) starting domain...
	I0408 19:28:57.452217  205913 main.go:141] libmachine: (old-k8s-version-257500) ensuring networks are active...
	I0408 19:28:57.453217  205913 main.go:141] libmachine: (old-k8s-version-257500) Ensuring network default is active
	I0408 19:28:57.453649  205913 main.go:141] libmachine: (old-k8s-version-257500) Ensuring network mk-old-k8s-version-257500 is active
	I0408 19:28:57.454266  205913 main.go:141] libmachine: (old-k8s-version-257500) getting domain XML...
	I0408 19:28:57.455336  205913 main.go:141] libmachine: (old-k8s-version-257500) creating domain...
	I0408 19:28:58.787823  205913 main.go:141] libmachine: (old-k8s-version-257500) waiting for IP...
	I0408 19:28:58.788970  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:28:58.789535  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:28:58.789675  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:28:58.789545  205950 retry.go:31] will retry after 259.392594ms: waiting for domain to come up
	I0408 19:28:59.051289  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:28:59.051854  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:28:59.051887  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:28:59.051813  205950 retry.go:31] will retry after 240.175181ms: waiting for domain to come up
	I0408 19:28:59.293473  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:28:59.294059  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:28:59.294118  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:28:59.294033  205950 retry.go:31] will retry after 357.32091ms: waiting for domain to come up
	I0408 19:28:59.652545  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:28:59.653148  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:28:59.653176  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:28:59.653125  205950 retry.go:31] will retry after 393.327622ms: waiting for domain to come up
	I0408 19:29:00.048042  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:00.048636  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:00.048677  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:00.048625  205950 retry.go:31] will retry after 507.811148ms: waiting for domain to come up
	I0408 19:29:00.558005  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:00.558705  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:00.558736  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:00.558633  205950 retry.go:31] will retry after 914.136159ms: waiting for domain to come up
	I0408 19:29:01.474960  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:01.475798  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:01.475831  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:01.475763  205950 retry.go:31] will retry after 788.44621ms: waiting for domain to come up
	I0408 19:29:02.265582  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:02.266214  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:02.266245  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:02.266179  205950 retry.go:31] will retry after 1.106556715s: waiting for domain to come up
	I0408 19:29:03.374249  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:03.374797  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:03.374825  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:03.374746  205950 retry.go:31] will retry after 1.714015579s: waiting for domain to come up
	I0408 19:29:05.090483  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:05.091111  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:05.091145  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:05.091049  205950 retry.go:31] will retry after 1.894322596s: waiting for domain to come up
	I0408 19:29:06.987303  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:06.987919  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:06.987951  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:06.987883  205950 retry.go:31] will retry after 2.670630376s: waiting for domain to come up
	I0408 19:29:09.659778  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:09.660302  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:09.660358  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:09.660263  205950 retry.go:31] will retry after 2.49614905s: waiting for domain to come up
	I0408 19:29:12.159273  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:12.159862  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | unable to find current IP address of domain old-k8s-version-257500 in network mk-old-k8s-version-257500
	I0408 19:29:12.159900  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | I0408 19:29:12.159809  205950 retry.go:31] will retry after 4.286824233s: waiting for domain to come up
	I0408 19:29:16.449105  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.449747  205913 main.go:141] libmachine: (old-k8s-version-257500) found domain IP: 192.168.39.192
	I0408 19:29:16.449765  205913 main.go:141] libmachine: (old-k8s-version-257500) reserving static IP address...
	I0408 19:29:16.449803  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has current primary IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.450375  205913 main.go:141] libmachine: (old-k8s-version-257500) reserved static IP address 192.168.39.192 for domain old-k8s-version-257500
	I0408 19:29:16.450415  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "old-k8s-version-257500", mac: "52:54:00:00:35:99", ip: "192.168.39.192"} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.450431  205913 main.go:141] libmachine: (old-k8s-version-257500) waiting for SSH...
	I0408 19:29:16.450458  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | skip adding static IP to network mk-old-k8s-version-257500 - found existing host DHCP lease matching {name: "old-k8s-version-257500", mac: "52:54:00:00:35:99", ip: "192.168.39.192"}
	I0408 19:29:16.450475  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | Getting to WaitForSSH function...
	I0408 19:29:16.452587  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.452998  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.453028  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.453190  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | Using SSH client type: external
	I0408 19:29:16.453220  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa (-rw-------)
	I0408 19:29:16.453279  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:29:16.453299  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | About to run SSH command:
	I0408 19:29:16.453341  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | exit 0
	I0408 19:29:16.577964  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | SSH cmd err, output: <nil>: 
	I0408 19:29:16.578326  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetConfigRaw
	I0408 19:29:16.578922  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:29:16.581769  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.582223  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.582251  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.582567  205913 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/config.json ...
	I0408 19:29:16.582784  205913 machine.go:93] provisionDockerMachine start ...
	I0408 19:29:16.582804  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:16.583062  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:16.585602  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.586006  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.586040  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.586235  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:16.586441  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.586642  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.586781  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:16.586949  205913 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:16.587325  205913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:29:16.587341  205913 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:29:16.694574  205913 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 19:29:16.694603  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:29:16.694951  205913 buildroot.go:166] provisioning hostname "old-k8s-version-257500"
	I0408 19:29:16.694996  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:29:16.695257  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:16.698219  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.698598  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.698637  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.698835  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:16.699045  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.699222  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.699404  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:16.699575  205913 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:16.699881  205913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:29:16.699900  205913 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-257500 && echo "old-k8s-version-257500" | sudo tee /etc/hostname
	I0408 19:29:16.818035  205913 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-257500
	
	I0408 19:29:16.818064  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:16.821456  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.822032  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.822074  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.822276  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:16.822595  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.822801  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:16.822976  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:16.823273  205913 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:16.823578  205913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:29:16.823607  205913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-257500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-257500/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-257500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:29:16.941149  205913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:29:16.941207  205913 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:29:16.941238  205913 buildroot.go:174] setting up certificates
	I0408 19:29:16.941253  205913 provision.go:84] configureAuth start
	I0408 19:29:16.941267  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetMachineName
	I0408 19:29:16.941640  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:29:16.944578  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.944972  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.944989  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.945254  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:16.948318  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.948738  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:16.948775  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:16.949094  205913 provision.go:143] copyHostCerts
	I0408 19:29:16.949152  205913 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:29:16.949161  205913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:29:16.949263  205913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:29:16.949376  205913 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:29:16.949387  205913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:29:16.949415  205913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:29:16.949468  205913 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:29:16.949478  205913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:29:16.949500  205913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:29:16.949549  205913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-257500 san=[127.0.0.1 192.168.39.192 localhost minikube old-k8s-version-257500]
	I0408 19:29:17.211930  205913 provision.go:177] copyRemoteCerts
	I0408 19:29:17.211996  205913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:29:17.212024  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.215266  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.215637  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.215682  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.215856  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.216118  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.216298  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.216438  205913 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:29:17.296839  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:29:17.323027  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 19:29:17.349064  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 19:29:17.376371  205913 provision.go:87] duration metric: took 435.102175ms to configureAuth
	I0408 19:29:17.376409  205913 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:29:17.376615  205913 config.go:182] Loaded profile config "old-k8s-version-257500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 19:29:17.376722  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.379803  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.380248  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.380281  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.380596  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.380836  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.381043  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.381204  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.381379  205913 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:17.381600  205913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:29:17.381619  205913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:29:17.625508  205913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:29:17.625598  205913 machine.go:96] duration metric: took 1.042797289s to provisionDockerMachine
	I0408 19:29:17.625616  205913 start.go:293] postStartSetup for "old-k8s-version-257500" (driver="kvm2")
	I0408 19:29:17.625631  205913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:29:17.625668  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:17.626019  205913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:29:17.626048  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.629165  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.629563  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.629592  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.629862  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.630065  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.630230  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.630362  205913 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:29:17.713159  205913 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:29:17.717324  205913 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:29:17.717354  205913 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:29:17.717427  205913 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:29:17.717526  205913 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:29:17.717657  205913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:29:17.728893  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:29:17.755134  205913 start.go:296] duration metric: took 129.493744ms for postStartSetup
	I0408 19:29:17.755217  205913 fix.go:56] duration metric: took 20.328517692s for fixHost
	I0408 19:29:17.755342  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.758293  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.758609  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.758657  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.758815  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.759053  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.759226  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.759351  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.759504  205913 main.go:141] libmachine: Using SSH client type: native
	I0408 19:29:17.759755  205913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I0408 19:29:17.759768  205913 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:29:17.866939  205913 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744140557.839562626
	
	I0408 19:29:17.866980  205913 fix.go:216] guest clock: 1744140557.839562626
	I0408 19:29:17.866988  205913 fix.go:229] Guest: 2025-04-08 19:29:17.839562626 +0000 UTC Remote: 2025-04-08 19:29:17.755306961 +0000 UTC m=+20.493587029 (delta=84.255665ms)
	I0408 19:29:17.867009  205913 fix.go:200] guest clock delta is within tolerance: 84.255665ms
	I0408 19:29:17.867015  205913 start.go:83] releasing machines lock for "old-k8s-version-257500", held for 20.44034988s
	I0408 19:29:17.867033  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:17.867343  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:29:17.870888  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.871301  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.871331  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.871532  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:17.872199  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:17.872413  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .DriverName
	I0408 19:29:17.872516  205913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:29:17.872578  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.872729  205913 ssh_runner.go:195] Run: cat /version.json
	I0408 19:29:17.872766  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHHostname
	I0408 19:29:17.875861  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.875895  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.876343  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.876375  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.876428  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:17.876457  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:17.876544  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.876717  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHPort
	I0408 19:29:17.876800  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.876875  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHKeyPath
	I0408 19:29:17.876957  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.877024  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetSSHUsername
	I0408 19:29:17.877168  205913 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:29:17.877171  205913 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/old-k8s-version-257500/id_rsa Username:docker}
	I0408 19:29:17.955112  205913 ssh_runner.go:195] Run: systemctl --version
	I0408 19:29:17.975105  205913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:29:18.123787  205913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:29:18.129870  205913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:29:18.129957  205913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:29:18.148348  205913 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:29:18.148381  205913 start.go:495] detecting cgroup driver to use...
	I0408 19:29:18.148479  205913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:29:18.165439  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:29:18.180333  205913 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:29:18.180396  205913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:29:18.195617  205913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:29:18.209849  205913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:29:18.330823  205913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:29:18.502676  205913 docker.go:233] disabling docker service ...
	I0408 19:29:18.502754  205913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:29:18.518427  205913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:29:18.532241  205913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:29:18.648569  205913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:29:18.774157  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:29:18.788957  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:29:18.808538  205913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 19:29:18.808619  205913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:29:18.820556  205913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:29:18.820637  205913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:29:18.831943  205913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:29:18.844705  205913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:29:18.857221  205913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:29:18.870098  205913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:29:18.883826  205913 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:29:18.883900  205913 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:29:18.900067  205913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:29:18.911577  205913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:29:19.048953  205913 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:29:19.148990  205913 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:29:19.149077  205913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:29:19.158479  205913 start.go:563] Will wait 60s for crictl version
	I0408 19:29:19.158552  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:19.162943  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:29:19.199010  205913 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:29:19.199109  205913 ssh_runner.go:195] Run: crio --version
	I0408 19:29:19.229368  205913 ssh_runner.go:195] Run: crio --version
	I0408 19:29:19.260492  205913 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 19:29:19.261858  205913 main.go:141] libmachine: (old-k8s-version-257500) Calling .GetIP
	I0408 19:29:19.265125  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:19.265568  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:35:99", ip: ""} in network mk-old-k8s-version-257500: {Iface:virbr1 ExpiryTime:2025-04-08 20:23:15 +0000 UTC Type:0 Mac:52:54:00:00:35:99 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:old-k8s-version-257500 Clientid:01:52:54:00:00:35:99}
	I0408 19:29:19.265602  205913 main.go:141] libmachine: (old-k8s-version-257500) DBG | domain old-k8s-version-257500 has defined IP address 192.168.39.192 and MAC address 52:54:00:00:35:99 in network mk-old-k8s-version-257500
	I0408 19:29:19.265917  205913 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 19:29:19.271314  205913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:29:19.285925  205913 kubeadm.go:883] updating cluster {Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:29:19.286074  205913 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 19:29:19.286143  205913 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:29:19.332023  205913 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:29:19.332135  205913 ssh_runner.go:195] Run: which lz4
	I0408 19:29:19.336151  205913 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:29:19.340605  205913 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:29:19.340651  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 19:29:20.954680  205913 crio.go:462] duration metric: took 1.618575827s to copy over tarball
	I0408 19:29:20.954758  205913 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:29:24.152914  205913 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.198122768s)
	I0408 19:29:24.152948  205913 crio.go:469] duration metric: took 3.198236943s to extract the tarball
	I0408 19:29:24.152958  205913 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:29:24.195648  205913 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:29:24.230963  205913 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 19:29:24.231003  205913 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 19:29:24.231099  205913 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.231135  205913 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.231157  205913 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.231113  205913 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:29:24.231120  205913 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.231188  205913 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.231188  205913 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.231332  205913 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0408 19:29:24.233155  205913 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.233183  205913 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.233261  205913 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.233413  205913 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 19:29:24.233433  205913 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:29:24.233457  205913 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.233466  205913 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.233504  205913 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.379103  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.380487  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.381076  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.383974  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.388584  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.398505  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.409601  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 19:29:24.498133  205913 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 19:29:24.498198  205913 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.498259  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.536449  205913 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 19:29:24.536510  205913 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.536567  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.555168  205913 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 19:29:24.555226  205913 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.555280  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.567468  205913 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 19:29:24.567564  205913 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.567485  205913 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 19:29:24.567609  205913 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.567633  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.567661  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.573427  205913 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 19:29:24.573480  205913 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.573525  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.585699  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.585761  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.585779  205913 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 19:29:24.585792  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.585761  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.585829  205913 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 19:29:24.585960  205913 ssh_runner.go:195] Run: which crictl
	I0408 19:29:24.585829  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.585854  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.742008  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.742051  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.742070  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.742100  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:29:24.742134  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.742224  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:24.742321  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.904805  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 19:29:24.904899  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 19:29:24.904841  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:29:24.904848  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 19:29:24.930286  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 19:29:24.930333  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 19:29:24.930333  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 19:29:25.030713  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 19:29:25.030713  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 19:29:25.067875  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 19:29:25.067980  205913 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 19:29:25.084463  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 19:29:25.084517  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 19:29:25.084525  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 19:29:25.115598  205913 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 19:29:25.942451  205913 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:29:26.090019  205913 cache_images.go:92] duration metric: took 1.858992172s to LoadCachedImages
	W0408 19:29:26.090121  205913 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20604-141129/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0408 19:29:26.090137  205913 kubeadm.go:934] updating node { 192.168.39.192 8443 v1.20.0 crio true true} ...
	I0408 19:29:26.090244  205913 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-257500 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:29:26.090371  205913 ssh_runner.go:195] Run: crio config
	I0408 19:29:26.141488  205913 cni.go:84] Creating CNI manager for ""
	I0408 19:29:26.141513  205913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:29:26.141525  205913 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 19:29:26.141544  205913 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.192 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-257500 NodeName:old-k8s-version-257500 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 19:29:26.141662  205913 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-257500"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:29:26.141793  205913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 19:29:26.152042  205913 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:29:26.152105  205913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:29:26.162198  205913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 19:29:26.180490  205913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:29:26.198464  205913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 19:29:26.216253  205913 ssh_runner.go:195] Run: grep 192.168.39.192	control-plane.minikube.internal$ /etc/hosts
	I0408 19:29:26.220593  205913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:29:26.233384  205913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:29:26.347499  205913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:29:26.365310  205913 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500 for IP: 192.168.39.192
	I0408 19:29:26.365338  205913 certs.go:194] generating shared ca certs ...
	I0408 19:29:26.365356  205913 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:26.365515  205913 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:29:26.365559  205913 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:29:26.365569  205913 certs.go:256] generating profile certs ...
	I0408 19:29:26.365657  205913 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/client.key
	I0408 19:29:26.365702  205913 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key.31857a68
	I0408 19:29:26.365738  205913 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key
	I0408 19:29:26.365887  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:29:26.365920  205913 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:29:26.365929  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:29:26.365963  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:29:26.366008  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:29:26.366041  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:29:26.366097  205913 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:29:26.366899  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:29:26.402101  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:29:26.430429  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:29:26.457388  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:29:26.486500  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 19:29:26.517194  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:29:26.552798  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:29:26.589100  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/old-k8s-version-257500/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:29:26.628539  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:29:26.655446  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:29:26.682191  205913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:29:26.709373  205913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:29:26.728714  205913 ssh_runner.go:195] Run: openssl version
	I0408 19:29:26.734798  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:29:26.745757  205913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:26.750186  205913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:26.750269  205913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:29:26.756304  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:29:26.768197  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:29:26.781188  205913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:29:26.786353  205913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:29:26.786437  205913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:29:26.792714  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:29:26.804371  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:29:26.815894  205913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:29:26.821374  205913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:29:26.821448  205913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:29:26.827707  205913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:29:26.839897  205913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:29:26.844978  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:29:26.851733  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:29:26.858333  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:29:26.865074  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:29:26.871497  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:29:26.877752  205913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
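
The string of `openssl x509 ... -checkend 86400` runs above only verifies that none of the control-plane certificates expires within the next 24 hours (86400 seconds). An equivalent check written in Go, sketched here for illustration (the path below is illustrative, not a claim about which file minikube checks first):

// certcheck.go - illustrative equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}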
	I0408 19:29:26.884372  205913 kubeadm.go:392] StartCluster: {Name:old-k8s-version-257500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-257500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:29:26.884461  205913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:29:26.884518  205913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:29:26.922777  205913 cri.go:89] found id: ""
	I0408 19:29:26.922848  205913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:29:26.933052  205913 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 19:29:26.933097  205913 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 19:29:26.933154  205913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:29:26.948610  205913 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:29:26.949595  205913 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-257500" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:29:26.950324  205913 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-141129/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-257500" cluster setting kubeconfig missing "old-k8s-version-257500" context setting]
	I0408 19:29:26.951722  205913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:29:26.954541  205913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:29:26.965220  205913 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.192
	I0408 19:29:26.965264  205913 kubeadm.go:1160] stopping kube-system containers ...
	I0408 19:29:26.965277  205913 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 19:29:26.965329  205913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:29:27.008386  205913 cri.go:89] found id: ""
	I0408 19:29:27.008469  205913 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 19:29:27.026543  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:29:27.036731  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:29:27.036761  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:29:27.036822  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:29:27.047352  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:29:27.047414  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:29:27.058429  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:29:27.068779  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:29:27.068862  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:29:27.080191  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:29:27.090478  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:29:27.090541  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:29:27.100560  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:29:27.110345  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:29:27.110417  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
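
The grep/rm sequence above checks each of the four kubeconfigs under /etc/kubernetes for the expected API server endpoint and deletes any file that is missing or does not mention it, so that the following `kubeadm init phase kubeconfig all` regenerates them. A minimal Go sketch of that pass, for illustration only (not minikube's actual implementation):

// stale_kubeconfig_sketch.go - illustrative check-and-remove pass.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file and a file that lacks the expected endpoint are treated
		// alike: remove it so kubeadm regenerates it against the right address.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
		}
	}
}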
	I0408 19:29:27.120886  205913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:29:27.133621  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:29:27.416327  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:29:28.005963  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:29:28.242616  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:29:28.345118  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:29:28.434120  205913 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:29:28.434198  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:28.934935  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:29.435086  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:29.935138  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:30.434784  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:30.935123  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:31.435055  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:31.935101  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:32.435242  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:32.934750  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:33.434625  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:33.934606  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:34.434306  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:34.934521  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:35.435330  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:35.935184  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:36.435076  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:36.934736  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:37.434523  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:37.934394  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:38.435189  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:38.935145  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:39.435158  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:39.934711  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:40.435088  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:40.934662  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:41.435095  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:41.934577  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:42.434986  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:42.935144  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:43.435100  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:43.935158  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:44.435126  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:44.935191  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:45.434781  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:45.934775  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:46.435106  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:46.935115  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:47.434828  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:47.934374  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:48.434382  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:48.935084  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:49.435096  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:49.935029  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:50.435154  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:50.934701  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:51.435055  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:51.934651  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:52.435110  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:52.934675  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:53.435113  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:53.934416  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:54.434472  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:54.935086  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:55.434806  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:55.934383  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:56.435065  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:56.935251  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:57.434639  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:57.935170  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:58.434339  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:58.934717  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:59.434721  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:29:59.935150  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:00.435056  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:00.935298  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:01.435214  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:01.935050  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:02.434989  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:02.934475  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:03.434373  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:03.935107  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:04.435130  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:04.935115  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:05.435129  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:05.934744  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:06.435138  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:06.934736  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:07.434394  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:07.934988  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:08.434465  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:08.934995  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:09.435170  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:09.935138  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:10.434465  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:10.934875  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:11.435080  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:11.935108  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:12.435022  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:12.934347  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:13.434332  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:13.935099  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:14.435047  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:14.935106  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:15.434933  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:15.934370  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:16.434660  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:16.934359  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:17.434318  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:17.934997  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:18.435142  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:18.935107  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:19.434963  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:19.934469  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:20.434511  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:20.935120  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:21.435271  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:21.934543  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:22.434367  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:22.934675  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:23.434691  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:23.934295  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:24.435078  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:24.935102  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:25.434569  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:25.934537  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:26.434459  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:26.934347  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:27.434523  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:27.934936  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
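
The long run of `pgrep -xnf kube-apiserver.*minikube.*` entries above is the wait for the kube-apiserver process to appear; the timestamps show a poll roughly every 500 ms, and after about a minute without a match the run below falls back to gathering diagnostics (crictl, journalctl, dmesg, describe nodes) before retrying. A minimal sketch of such a poll-with-deadline loop, for illustration only (not the actual minikube wait code; the interval and deadline are read off the log timestamps):

// apiserver_wait_sketch.go - illustrative poll-until-deadline loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process is visible via pgrep.
func apiserverRunning() bool {
	// pgrep exits 0 when at least one process matches the pattern.
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver; collecting diagnostics instead")
}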
	I0408 19:30:28.435098  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:28.435192  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:28.475355  205913 cri.go:89] found id: ""
	I0408 19:30:28.475383  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.475393  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:28.475401  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:28.475467  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:28.512760  205913 cri.go:89] found id: ""
	I0408 19:30:28.512798  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.512810  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:28.512831  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:28.512910  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:28.548238  205913 cri.go:89] found id: ""
	I0408 19:30:28.548270  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.548280  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:28.548290  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:28.548361  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:28.582594  205913 cri.go:89] found id: ""
	I0408 19:30:28.582630  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.582642  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:28.582652  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:28.582714  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:28.617816  205913 cri.go:89] found id: ""
	I0408 19:30:28.617866  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.617878  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:28.617886  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:28.617961  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:28.653942  205913 cri.go:89] found id: ""
	I0408 19:30:28.653976  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.653984  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:28.653991  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:28.654045  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:28.688709  205913 cri.go:89] found id: ""
	I0408 19:30:28.688739  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.688748  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:28.688754  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:28.688807  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:28.725436  205913 cri.go:89] found id: ""
	I0408 19:30:28.725468  205913 logs.go:282] 0 containers: []
	W0408 19:30:28.725476  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:28.725486  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:28.725497  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:28.802510  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:28.802555  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:28.842081  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:28.842109  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:28.897729  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:28.897797  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:28.912527  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:28.912568  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:29.056134  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
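
The `describe nodes` step fails with "connection refused" on localhost:8443 because no kube-apiserver is listening yet (every crictl query above returned an empty ID list), not because the kubeconfig points at the wrong host. A quick way to confirm that distinction, sketched in Go for illustration (the address is taken from the error above):

// probe8443_sketch.go - distinguish "nothing listening" from other failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used by `kubectl describe nodes` points at localhost:8443.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here means no API server is bound to the port yet,
		// which matches the empty `crictl ps` output earlier in the log.
		fmt.Println("apiserver not listening:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443; the failure lies elsewhere")
}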
	I0408 19:30:31.557609  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:31.577069  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:31.577151  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:31.626028  205913 cri.go:89] found id: ""
	I0408 19:30:31.626057  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.626068  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:31.626076  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:31.626138  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:31.683249  205913 cri.go:89] found id: ""
	I0408 19:30:31.683277  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.683285  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:31.683292  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:31.683356  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:31.725786  205913 cri.go:89] found id: ""
	I0408 19:30:31.725816  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.725827  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:31.725847  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:31.725939  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:31.761774  205913 cri.go:89] found id: ""
	I0408 19:30:31.761806  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.761817  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:31.761825  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:31.761909  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:31.801352  205913 cri.go:89] found id: ""
	I0408 19:30:31.801389  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.801400  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:31.801408  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:31.801473  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:31.836999  205913 cri.go:89] found id: ""
	I0408 19:30:31.837027  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.837038  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:31.837046  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:31.837112  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:31.875004  205913 cri.go:89] found id: ""
	I0408 19:30:31.875039  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.875052  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:31.875060  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:31.875131  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:31.911407  205913 cri.go:89] found id: ""
	I0408 19:30:31.911439  205913 logs.go:282] 0 containers: []
	W0408 19:30:31.911447  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:31.911459  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:31.911471  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:31.964064  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:31.964110  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:31.979769  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:31.979802  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:32.052544  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:32.052566  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:32.052582  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:32.130486  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:32.130520  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:34.678554  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:34.692886  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:34.692956  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:34.729621  205913 cri.go:89] found id: ""
	I0408 19:30:34.729659  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.729674  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:34.729683  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:34.729754  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:34.768702  205913 cri.go:89] found id: ""
	I0408 19:30:34.768738  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.768749  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:34.768756  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:34.768821  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:34.807064  205913 cri.go:89] found id: ""
	I0408 19:30:34.807091  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.807098  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:34.807112  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:34.807182  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:34.846748  205913 cri.go:89] found id: ""
	I0408 19:30:34.846776  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.846784  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:34.846790  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:34.846853  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:34.886697  205913 cri.go:89] found id: ""
	I0408 19:30:34.886725  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.886735  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:34.886740  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:34.886801  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:34.927160  205913 cri.go:89] found id: ""
	I0408 19:30:34.927192  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.927201  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:34.927208  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:34.927269  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:34.964482  205913 cri.go:89] found id: ""
	I0408 19:30:34.964517  205913 logs.go:282] 0 containers: []
	W0408 19:30:34.964525  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:34.964531  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:34.964584  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:35.002199  205913 cri.go:89] found id: ""
	I0408 19:30:35.002231  205913 logs.go:282] 0 containers: []
	W0408 19:30:35.002242  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:35.002256  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:35.002275  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:35.047063  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:35.047094  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:35.102733  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:35.102783  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:35.117799  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:35.117846  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:35.207680  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:35.207710  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:35.207724  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:37.791820  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:37.805146  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:37.805231  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:37.841524  205913 cri.go:89] found id: ""
	I0408 19:30:37.841551  205913 logs.go:282] 0 containers: []
	W0408 19:30:37.841559  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:37.841565  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:37.841615  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:37.875139  205913 cri.go:89] found id: ""
	I0408 19:30:37.875175  205913 logs.go:282] 0 containers: []
	W0408 19:30:37.875196  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:37.875204  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:37.875276  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:37.912111  205913 cri.go:89] found id: ""
	I0408 19:30:37.912179  205913 logs.go:282] 0 containers: []
	W0408 19:30:37.912195  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:37.912205  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:37.912284  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:37.951098  205913 cri.go:89] found id: ""
	I0408 19:30:37.951129  205913 logs.go:282] 0 containers: []
	W0408 19:30:37.951140  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:37.951148  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:37.951219  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:37.992059  205913 cri.go:89] found id: ""
	I0408 19:30:37.992095  205913 logs.go:282] 0 containers: []
	W0408 19:30:37.992106  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:37.992118  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:37.992198  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:38.028672  205913 cri.go:89] found id: ""
	I0408 19:30:38.028700  205913 logs.go:282] 0 containers: []
	W0408 19:30:38.028708  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:38.028716  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:38.028790  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:38.064389  205913 cri.go:89] found id: ""
	I0408 19:30:38.064425  205913 logs.go:282] 0 containers: []
	W0408 19:30:38.064437  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:38.064445  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:38.064515  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:38.103962  205913 cri.go:89] found id: ""
	I0408 19:30:38.104000  205913 logs.go:282] 0 containers: []
	W0408 19:30:38.104012  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:38.104025  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:38.104039  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:38.187056  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:38.187103  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:38.229000  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:38.229033  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:38.280369  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:38.280414  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:38.294545  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:38.294590  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:38.366734  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:40.867085  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:40.882032  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:40.882099  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:40.919043  205913 cri.go:89] found id: ""
	I0408 19:30:40.919073  205913 logs.go:282] 0 containers: []
	W0408 19:30:40.919084  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:40.919092  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:40.919154  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:40.960228  205913 cri.go:89] found id: ""
	I0408 19:30:40.960256  205913 logs.go:282] 0 containers: []
	W0408 19:30:40.960267  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:40.960279  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:40.960354  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:40.996709  205913 cri.go:89] found id: ""
	I0408 19:30:40.996738  205913 logs.go:282] 0 containers: []
	W0408 19:30:40.996746  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:40.996753  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:40.996814  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:41.030754  205913 cri.go:89] found id: ""
	I0408 19:30:41.030826  205913 logs.go:282] 0 containers: []
	W0408 19:30:41.030871  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:41.030889  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:41.030965  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:41.072129  205913 cri.go:89] found id: ""
	I0408 19:30:41.072171  205913 logs.go:282] 0 containers: []
	W0408 19:30:41.072185  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:41.072206  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:41.072268  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:41.112159  205913 cri.go:89] found id: ""
	I0408 19:30:41.112190  205913 logs.go:282] 0 containers: []
	W0408 19:30:41.112216  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:41.112225  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:41.112285  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:41.150982  205913 cri.go:89] found id: ""
	I0408 19:30:41.151016  205913 logs.go:282] 0 containers: []
	W0408 19:30:41.151025  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:41.151031  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:41.151125  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:41.186267  205913 cri.go:89] found id: ""
	I0408 19:30:41.186294  205913 logs.go:282] 0 containers: []
	W0408 19:30:41.186302  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:41.186311  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:41.186324  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:41.238442  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:41.238484  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:41.252086  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:41.252122  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:41.322624  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:41.322654  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:41.322668  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:41.404460  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:41.404508  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:43.945499  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:43.961122  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:43.961188  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:44.000614  205913 cri.go:89] found id: ""
	I0408 19:30:44.000644  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.000652  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:44.000658  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:44.000711  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:44.040354  205913 cri.go:89] found id: ""
	I0408 19:30:44.040390  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.040401  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:44.040408  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:44.040462  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:44.083775  205913 cri.go:89] found id: ""
	I0408 19:30:44.083805  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.083816  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:44.083825  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:44.083899  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:44.121743  205913 cri.go:89] found id: ""
	I0408 19:30:44.121778  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.121790  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:44.121799  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:44.121904  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:44.161507  205913 cri.go:89] found id: ""
	I0408 19:30:44.161541  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.161552  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:44.161561  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:44.161627  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:44.197122  205913 cri.go:89] found id: ""
	I0408 19:30:44.197150  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.197162  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:44.197171  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:44.197293  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:44.234786  205913 cri.go:89] found id: ""
	I0408 19:30:44.234815  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.234823  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:44.234830  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:44.234884  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:44.270279  205913 cri.go:89] found id: ""
	I0408 19:30:44.270316  205913 logs.go:282] 0 containers: []
	W0408 19:30:44.270328  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:44.270341  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:44.270356  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:44.325498  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:44.325551  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:44.340284  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:44.340329  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:44.414066  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:44.414089  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:44.414105  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:44.494907  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:44.494954  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:47.037776  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:47.051909  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:47.051988  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:47.091895  205913 cri.go:89] found id: ""
	I0408 19:30:47.091920  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.091928  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:47.091934  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:47.092002  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:47.135818  205913 cri.go:89] found id: ""
	I0408 19:30:47.135844  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.135852  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:47.135859  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:47.135911  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:47.172620  205913 cri.go:89] found id: ""
	I0408 19:30:47.172650  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.172662  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:47.172669  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:47.172735  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:47.212850  205913 cri.go:89] found id: ""
	I0408 19:30:47.212886  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.212897  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:47.212905  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:47.212991  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:47.248194  205913 cri.go:89] found id: ""
	I0408 19:30:47.248225  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.248235  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:47.248251  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:47.248313  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:47.288860  205913 cri.go:89] found id: ""
	I0408 19:30:47.288896  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.288908  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:47.288917  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:47.288983  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:47.325807  205913 cri.go:89] found id: ""
	I0408 19:30:47.325859  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.325871  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:47.325879  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:47.325942  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:47.362737  205913 cri.go:89] found id: ""
	I0408 19:30:47.362770  205913 logs.go:282] 0 containers: []
	W0408 19:30:47.362778  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:47.362788  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:47.362802  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:47.433697  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:47.433733  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:47.433754  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:47.515186  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:47.515230  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:47.558619  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:47.558648  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:47.610696  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:47.610752  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:50.126020  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:50.140733  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:50.140817  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:50.175067  205913 cri.go:89] found id: ""
	I0408 19:30:50.175105  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.175117  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:50.175125  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:50.175192  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:50.213440  205913 cri.go:89] found id: ""
	I0408 19:30:50.213468  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.213477  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:50.213485  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:50.213552  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:50.249267  205913 cri.go:89] found id: ""
	I0408 19:30:50.249298  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.249306  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:50.249313  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:50.249366  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:50.283583  205913 cri.go:89] found id: ""
	I0408 19:30:50.283613  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.283622  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:50.283629  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:50.283688  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:50.318858  205913 cri.go:89] found id: ""
	I0408 19:30:50.318892  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.318900  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:50.318907  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:50.318965  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:50.352478  205913 cri.go:89] found id: ""
	I0408 19:30:50.352511  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.352524  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:50.352533  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:50.352599  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:50.386649  205913 cri.go:89] found id: ""
	I0408 19:30:50.386683  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.386692  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:50.386698  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:50.386752  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:50.421507  205913 cri.go:89] found id: ""
	I0408 19:30:50.421533  205913 logs.go:282] 0 containers: []
	W0408 19:30:50.421540  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:50.421550  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:50.421562  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:50.477207  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:50.477248  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:50.491376  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:50.491412  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:50.568495  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:50.568528  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:50.568546  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:50.650079  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:50.650123  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:53.193043  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:53.206710  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:53.206774  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:53.242490  205913 cri.go:89] found id: ""
	I0408 19:30:53.242527  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.242540  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:53.242549  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:53.242623  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:53.275798  205913 cri.go:89] found id: ""
	I0408 19:30:53.275823  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.275831  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:53.275838  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:53.275893  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:53.310469  205913 cri.go:89] found id: ""
	I0408 19:30:53.310535  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.310549  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:53.310556  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:53.310610  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:53.344060  205913 cri.go:89] found id: ""
	I0408 19:30:53.344098  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.344111  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:53.344118  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:53.344174  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:53.380060  205913 cri.go:89] found id: ""
	I0408 19:30:53.380089  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.380097  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:53.380106  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:53.380163  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:53.413766  205913 cri.go:89] found id: ""
	I0408 19:30:53.413790  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.413798  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:53.413804  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:53.413895  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:53.449649  205913 cri.go:89] found id: ""
	I0408 19:30:53.449675  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.449686  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:53.449692  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:53.449753  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:53.482637  205913 cri.go:89] found id: ""
	I0408 19:30:53.482668  205913 logs.go:282] 0 containers: []
	W0408 19:30:53.482676  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:53.482686  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:53.482700  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:53.563364  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:53.563405  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:53.602456  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:53.602484  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:53.652369  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:53.652408  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:53.665762  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:53.665795  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:53.744053  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:56.245072  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:56.258082  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:56.258165  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:56.291686  205913 cri.go:89] found id: ""
	I0408 19:30:56.291721  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.291733  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:56.291742  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:56.291809  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:56.327306  205913 cri.go:89] found id: ""
	I0408 19:30:56.327340  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.327353  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:56.327362  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:56.327431  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:56.361197  205913 cri.go:89] found id: ""
	I0408 19:30:56.361228  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.361237  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:56.361243  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:56.361312  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:56.397222  205913 cri.go:89] found id: ""
	I0408 19:30:56.397250  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.397260  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:56.397266  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:56.397321  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:56.431668  205913 cri.go:89] found id: ""
	I0408 19:30:56.431699  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.431711  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:56.431719  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:56.431788  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:56.470507  205913 cri.go:89] found id: ""
	I0408 19:30:56.470541  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.470552  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:56.470561  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:56.470631  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:56.505937  205913 cri.go:89] found id: ""
	I0408 19:30:56.505964  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.505972  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:56.505979  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:56.506032  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:56.542176  205913 cri.go:89] found id: ""
	I0408 19:30:56.542204  205913 logs.go:282] 0 containers: []
	W0408 19:30:56.542213  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:56.542234  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:56.542248  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:30:56.593297  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:56.593341  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:56.607243  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:56.607276  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:56.680603  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:56.680624  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:56.680636  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:56.760897  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:56.760941  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:59.301787  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:30:59.316603  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:30:59.316672  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:30:59.352883  205913 cri.go:89] found id: ""
	I0408 19:30:59.352916  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.352924  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:30:59.352930  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:30:59.352994  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:30:59.388730  205913 cri.go:89] found id: ""
	I0408 19:30:59.388757  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.388765  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:30:59.388771  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:30:59.388831  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:30:59.422789  205913 cri.go:89] found id: ""
	I0408 19:30:59.422822  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.422830  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:30:59.422837  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:30:59.422893  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:30:59.457875  205913 cri.go:89] found id: ""
	I0408 19:30:59.457915  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.457924  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:30:59.457931  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:30:59.458004  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:30:59.493363  205913 cri.go:89] found id: ""
	I0408 19:30:59.493396  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.493408  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:30:59.493417  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:30:59.493480  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:30:59.527445  205913 cri.go:89] found id: ""
	I0408 19:30:59.527479  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.527490  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:30:59.527498  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:30:59.527570  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:30:59.561757  205913 cri.go:89] found id: ""
	I0408 19:30:59.561793  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.561804  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:30:59.561812  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:30:59.561905  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:30:59.594270  205913 cri.go:89] found id: ""
	I0408 19:30:59.594302  205913 logs.go:282] 0 containers: []
	W0408 19:30:59.594313  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:30:59.594326  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:30:59.594343  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:30:59.609989  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:30:59.610026  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:30:59.691303  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:30:59.691334  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:30:59.691351  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:30:59.771023  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:30:59.771070  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:30:59.813601  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:30:59.813628  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:02.366982  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:02.380278  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:02.380342  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:02.415829  205913 cri.go:89] found id: ""
	I0408 19:31:02.415858  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.415866  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:02.415873  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:02.415925  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:02.453492  205913 cri.go:89] found id: ""
	I0408 19:31:02.453520  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.453532  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:02.453541  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:02.453610  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:02.488533  205913 cri.go:89] found id: ""
	I0408 19:31:02.488565  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.488578  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:02.488586  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:02.488643  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:02.523128  205913 cri.go:89] found id: ""
	I0408 19:31:02.523164  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.523176  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:02.523185  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:02.523252  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:02.561911  205913 cri.go:89] found id: ""
	I0408 19:31:02.561939  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.561951  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:02.561960  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:02.562045  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:02.599824  205913 cri.go:89] found id: ""
	I0408 19:31:02.599851  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.599859  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:02.599866  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:02.599919  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:02.638433  205913 cri.go:89] found id: ""
	I0408 19:31:02.638464  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.638475  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:02.638483  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:02.638551  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:02.675241  205913 cri.go:89] found id: ""
	I0408 19:31:02.675273  205913 logs.go:282] 0 containers: []
	W0408 19:31:02.675282  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:02.675292  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:02.675304  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:02.759335  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:02.759384  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:02.801959  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:02.801992  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:02.851877  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:02.851920  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:02.866741  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:02.866784  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:02.941599  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:05.442003  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:05.457802  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:05.457945  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:05.495335  205913 cri.go:89] found id: ""
	I0408 19:31:05.495366  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.495376  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:05.495388  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:05.495450  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:05.529408  205913 cri.go:89] found id: ""
	I0408 19:31:05.529441  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.529453  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:05.529461  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:05.529528  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:05.571383  205913 cri.go:89] found id: ""
	I0408 19:31:05.571412  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.571421  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:05.571426  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:05.571486  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:05.610082  205913 cri.go:89] found id: ""
	I0408 19:31:05.610118  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.610129  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:05.610138  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:05.610218  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:05.659425  205913 cri.go:89] found id: ""
	I0408 19:31:05.659454  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.659461  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:05.659468  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:05.659529  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:05.700935  205913 cri.go:89] found id: ""
	I0408 19:31:05.700964  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.700972  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:05.700979  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:05.701055  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:05.739503  205913 cri.go:89] found id: ""
	I0408 19:31:05.739527  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.739535  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:05.739541  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:05.739597  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:05.774939  205913 cri.go:89] found id: ""
	I0408 19:31:05.774970  205913 logs.go:282] 0 containers: []
	W0408 19:31:05.774982  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:05.774995  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:05.775009  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:05.825704  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:05.825750  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:05.841165  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:05.841201  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:05.910751  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:05.910779  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:05.910798  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:05.993774  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:05.993819  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:08.535774  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:08.549312  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:08.549390  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:08.586113  205913 cri.go:89] found id: ""
	I0408 19:31:08.586141  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.586152  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:08.586162  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:08.586232  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:08.620163  205913 cri.go:89] found id: ""
	I0408 19:31:08.620196  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.620209  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:08.620217  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:08.620293  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:08.653129  205913 cri.go:89] found id: ""
	I0408 19:31:08.653160  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.653171  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:08.653178  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:08.653356  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:08.688566  205913 cri.go:89] found id: ""
	I0408 19:31:08.688597  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.688606  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:08.688612  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:08.688665  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:08.730671  205913 cri.go:89] found id: ""
	I0408 19:31:08.730721  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.730734  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:08.730742  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:08.730821  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:08.764066  205913 cri.go:89] found id: ""
	I0408 19:31:08.764102  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.764114  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:08.764124  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:08.764198  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:08.798849  205913 cri.go:89] found id: ""
	I0408 19:31:08.798883  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.798894  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:08.798902  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:08.798965  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:08.834089  205913 cri.go:89] found id: ""
	I0408 19:31:08.834117  205913 logs.go:282] 0 containers: []
	W0408 19:31:08.834124  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:08.834134  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:08.834146  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:08.921183  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:08.921231  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:08.960567  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:08.960598  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:09.012422  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:09.012466  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:09.029919  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:09.029953  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:09.099281  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:11.599879  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:11.613079  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:11.613154  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:11.655045  205913 cri.go:89] found id: ""
	I0408 19:31:11.655083  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.655095  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:11.655104  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:11.655184  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:11.695121  205913 cri.go:89] found id: ""
	I0408 19:31:11.695153  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.695164  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:11.695174  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:11.695226  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:11.729683  205913 cri.go:89] found id: ""
	I0408 19:31:11.729714  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.729722  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:11.729728  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:11.729788  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:11.764664  205913 cri.go:89] found id: ""
	I0408 19:31:11.764697  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.764705  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:11.764712  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:11.764784  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:11.802694  205913 cri.go:89] found id: ""
	I0408 19:31:11.802725  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.802736  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:11.802745  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:11.802822  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:11.838073  205913 cri.go:89] found id: ""
	I0408 19:31:11.838102  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.838111  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:11.838118  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:11.838174  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:11.872764  205913 cri.go:89] found id: ""
	I0408 19:31:11.872795  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.872803  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:11.872810  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:11.872867  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:11.911392  205913 cri.go:89] found id: ""
	I0408 19:31:11.911423  205913 logs.go:282] 0 containers: []
	W0408 19:31:11.911432  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:11.911444  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:11.911457  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:11.925365  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:11.925398  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:12.001586  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:12.001618  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:12.001637  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:12.096092  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:12.096146  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:12.158267  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:12.158305  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:14.710367  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:14.723576  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:14.723662  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:14.758321  205913 cri.go:89] found id: ""
	I0408 19:31:14.758358  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.758371  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:14.758381  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:14.758455  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:14.792822  205913 cri.go:89] found id: ""
	I0408 19:31:14.792850  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.792858  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:14.792864  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:14.792945  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:14.827723  205913 cri.go:89] found id: ""
	I0408 19:31:14.827752  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.827760  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:14.827766  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:14.827819  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:14.863482  205913 cri.go:89] found id: ""
	I0408 19:31:14.863508  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.863516  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:14.863522  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:14.863573  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:14.898723  205913 cri.go:89] found id: ""
	I0408 19:31:14.898759  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.898770  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:14.898778  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:14.898856  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:14.931165  205913 cri.go:89] found id: ""
	I0408 19:31:14.931193  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.931201  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:14.931208  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:14.931278  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:14.965598  205913 cri.go:89] found id: ""
	I0408 19:31:14.965625  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.965638  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:14.965647  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:14.965717  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:14.999489  205913 cri.go:89] found id: ""
	I0408 19:31:14.999513  205913 logs.go:282] 0 containers: []
	W0408 19:31:14.999521  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:14.999530  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:14.999542  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:15.076618  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:15.076670  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:15.124444  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:15.124490  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:15.178461  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:15.178504  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:15.193101  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:15.193140  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:15.267604  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:17.768511  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:17.787950  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:17.788026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:17.830364  205913 cri.go:89] found id: ""
	I0408 19:31:17.830393  205913 logs.go:282] 0 containers: []
	W0408 19:31:17.830402  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:17.830418  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:17.830479  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:17.880310  205913 cri.go:89] found id: ""
	I0408 19:31:17.880347  205913 logs.go:282] 0 containers: []
	W0408 19:31:17.880360  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:17.880369  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:17.880433  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:17.932463  205913 cri.go:89] found id: ""
	I0408 19:31:17.932494  205913 logs.go:282] 0 containers: []
	W0408 19:31:17.932508  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:17.932516  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:17.932583  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:17.968576  205913 cri.go:89] found id: ""
	I0408 19:31:17.968605  205913 logs.go:282] 0 containers: []
	W0408 19:31:17.968613  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:17.968619  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:17.968675  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:18.002543  205913 cri.go:89] found id: ""
	I0408 19:31:18.002572  205913 logs.go:282] 0 containers: []
	W0408 19:31:18.002580  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:18.002586  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:18.002638  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:18.038427  205913 cri.go:89] found id: ""
	I0408 19:31:18.038465  205913 logs.go:282] 0 containers: []
	W0408 19:31:18.038477  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:18.038486  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:18.038557  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:18.074615  205913 cri.go:89] found id: ""
	I0408 19:31:18.074652  205913 logs.go:282] 0 containers: []
	W0408 19:31:18.074664  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:18.074673  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:18.074745  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:18.114371  205913 cri.go:89] found id: ""
	I0408 19:31:18.114398  205913 logs.go:282] 0 containers: []
	W0408 19:31:18.114409  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:18.114420  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:18.114435  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:18.193068  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:18.193115  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:18.233175  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:18.233204  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:18.286111  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:18.286154  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:18.302000  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:18.302035  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:18.380302  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:20.881303  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:20.895033  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:20.895123  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:20.930373  205913 cri.go:89] found id: ""
	I0408 19:31:20.930397  205913 logs.go:282] 0 containers: []
	W0408 19:31:20.930405  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:20.930411  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:20.930475  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:20.965334  205913 cri.go:89] found id: ""
	I0408 19:31:20.965366  205913 logs.go:282] 0 containers: []
	W0408 19:31:20.965374  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:20.965381  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:20.965433  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:21.001441  205913 cri.go:89] found id: ""
	I0408 19:31:21.001468  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.001476  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:21.001483  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:21.001573  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:21.036396  205913 cri.go:89] found id: ""
	I0408 19:31:21.036422  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.036431  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:21.036437  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:21.036493  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:21.070072  205913 cri.go:89] found id: ""
	I0408 19:31:21.070106  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.070118  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:21.070127  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:21.070194  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:21.106345  205913 cri.go:89] found id: ""
	I0408 19:31:21.106375  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.106383  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:21.106390  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:21.106455  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:21.146045  205913 cri.go:89] found id: ""
	I0408 19:31:21.146074  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.146082  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:21.146088  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:21.146145  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:21.181255  205913 cri.go:89] found id: ""
	I0408 19:31:21.181290  205913 logs.go:282] 0 containers: []
	W0408 19:31:21.181302  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:21.181314  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:21.181332  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:21.233224  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:21.233271  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:21.247472  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:21.247504  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:21.318622  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:21.318652  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:21.318668  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:21.394951  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:21.395009  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:23.934880  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:23.949719  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:23.949792  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:23.986226  205913 cri.go:89] found id: ""
	I0408 19:31:23.986272  205913 logs.go:282] 0 containers: []
	W0408 19:31:23.986281  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:23.986287  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:23.986345  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:24.023487  205913 cri.go:89] found id: ""
	I0408 19:31:24.023514  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.023522  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:24.023528  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:24.023582  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:24.056874  205913 cri.go:89] found id: ""
	I0408 19:31:24.056912  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.056924  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:24.056933  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:24.057006  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:24.091799  205913 cri.go:89] found id: ""
	I0408 19:31:24.091829  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.091842  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:24.091850  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:24.091907  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:24.136148  205913 cri.go:89] found id: ""
	I0408 19:31:24.136178  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.136189  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:24.136199  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:24.136281  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:24.172446  205913 cri.go:89] found id: ""
	I0408 19:31:24.172475  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.172483  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:24.172490  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:24.172549  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:24.217463  205913 cri.go:89] found id: ""
	I0408 19:31:24.217490  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.217499  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:24.217505  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:24.217572  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:24.253892  205913 cri.go:89] found id: ""
	I0408 19:31:24.253920  205913 logs.go:282] 0 containers: []
	W0408 19:31:24.253928  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:24.253939  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:24.253953  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:24.307188  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:24.307250  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:24.321942  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:24.321980  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:24.397677  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:24.397702  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:24.397716  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:24.479671  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:24.479707  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:27.022291  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:27.035575  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:27.035673  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:27.070013  205913 cri.go:89] found id: ""
	I0408 19:31:27.070047  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.070059  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:27.070067  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:27.070126  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:27.109529  205913 cri.go:89] found id: ""
	I0408 19:31:27.109563  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.109576  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:27.109584  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:27.109659  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:27.146779  205913 cri.go:89] found id: ""
	I0408 19:31:27.146812  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.146823  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:27.146830  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:27.146901  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:27.182724  205913 cri.go:89] found id: ""
	I0408 19:31:27.182764  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.182775  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:27.182784  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:27.182855  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:27.225016  205913 cri.go:89] found id: ""
	I0408 19:31:27.225041  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.225049  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:27.225055  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:27.225105  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:27.259528  205913 cri.go:89] found id: ""
	I0408 19:31:27.259555  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.259563  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:27.259569  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:27.259643  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:27.297818  205913 cri.go:89] found id: ""
	I0408 19:31:27.297873  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.297886  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:27.297894  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:27.297976  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:27.336073  205913 cri.go:89] found id: ""
	I0408 19:31:27.336107  205913 logs.go:282] 0 containers: []
	W0408 19:31:27.336116  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:27.336128  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:27.336140  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:27.392627  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:27.392670  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:27.406919  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:27.406961  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:27.483300  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:27.483327  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:27.483373  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:27.561264  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:27.561312  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:30.102704  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:30.116010  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:30.116104  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:30.153618  205913 cri.go:89] found id: ""
	I0408 19:31:30.153651  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.153663  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:30.153671  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:30.153734  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:30.191291  205913 cri.go:89] found id: ""
	I0408 19:31:30.191343  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.191357  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:30.191366  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:30.191438  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:30.227951  205913 cri.go:89] found id: ""
	I0408 19:31:30.227981  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.227989  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:30.227995  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:30.228060  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:30.262524  205913 cri.go:89] found id: ""
	I0408 19:31:30.262553  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.262562  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:30.262568  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:30.262630  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:30.300228  205913 cri.go:89] found id: ""
	I0408 19:31:30.300263  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.300275  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:30.300284  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:30.300408  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:30.340856  205913 cri.go:89] found id: ""
	I0408 19:31:30.340888  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.340900  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:30.340909  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:30.341003  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:30.376346  205913 cri.go:89] found id: ""
	I0408 19:31:30.376372  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.376380  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:30.376386  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:30.376439  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:30.414625  205913 cri.go:89] found id: ""
	I0408 19:31:30.414655  205913 logs.go:282] 0 containers: []
	W0408 19:31:30.414666  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:30.414678  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:30.414697  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:30.492660  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:30.492685  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:30.492702  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:30.577410  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:30.577455  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:30.619423  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:30.619457  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:30.673125  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:30.673177  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:33.188675  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:33.203036  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:33.203120  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:33.240229  205913 cri.go:89] found id: ""
	I0408 19:31:33.240256  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.240264  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:33.240270  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:33.240330  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:33.280805  205913 cri.go:89] found id: ""
	I0408 19:31:33.280834  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.280843  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:33.280849  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:33.280908  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:33.315848  205913 cri.go:89] found id: ""
	I0408 19:31:33.315884  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.315929  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:33.315943  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:33.316015  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:33.351690  205913 cri.go:89] found id: ""
	I0408 19:31:33.351721  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.351730  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:33.351736  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:33.351799  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:33.387993  205913 cri.go:89] found id: ""
	I0408 19:31:33.388022  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.388030  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:33.388036  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:33.388101  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:33.424301  205913 cri.go:89] found id: ""
	I0408 19:31:33.424331  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.424339  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:33.424346  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:33.424403  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:33.461863  205913 cri.go:89] found id: ""
	I0408 19:31:33.461895  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.461907  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:33.461916  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:33.461995  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:33.505061  205913 cri.go:89] found id: ""
	I0408 19:31:33.505093  205913 logs.go:282] 0 containers: []
	W0408 19:31:33.505106  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:33.505120  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:33.505136  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:33.560875  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:33.560922  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:33.575348  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:33.575394  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:33.645547  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:33.645576  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:33.645590  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:33.730179  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:33.730224  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:36.269071  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:36.283060  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:36.283132  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:36.319786  205913 cri.go:89] found id: ""
	I0408 19:31:36.319819  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.319831  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:36.319840  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:36.319894  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:36.355197  205913 cri.go:89] found id: ""
	I0408 19:31:36.355234  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.355247  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:36.355255  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:36.355313  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:36.391039  205913 cri.go:89] found id: ""
	I0408 19:31:36.391071  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.391080  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:36.391087  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:36.391139  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:36.427964  205913 cri.go:89] found id: ""
	I0408 19:31:36.428036  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.428056  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:36.428064  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:36.428145  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:36.477440  205913 cri.go:89] found id: ""
	I0408 19:31:36.477476  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.477488  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:36.477497  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:36.477570  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:36.514202  205913 cri.go:89] found id: ""
	I0408 19:31:36.514265  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.514278  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:36.514286  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:36.514362  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:36.553543  205913 cri.go:89] found id: ""
	I0408 19:31:36.553580  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.553592  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:36.553602  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:36.553671  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:36.597084  205913 cri.go:89] found id: ""
	I0408 19:31:36.597117  205913 logs.go:282] 0 containers: []
	W0408 19:31:36.597126  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:36.597136  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:36.597153  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:36.613147  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:36.613179  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:36.690023  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:36.690054  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:36.690072  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:36.768260  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:36.768307  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:36.816883  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:36.816918  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:39.373386  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:39.386934  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:39.387016  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:39.421115  205913 cri.go:89] found id: ""
	I0408 19:31:39.421157  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.421168  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:39.421177  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:39.421245  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:39.459160  205913 cri.go:89] found id: ""
	I0408 19:31:39.459200  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.459215  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:39.459224  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:39.459330  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:39.497620  205913 cri.go:89] found id: ""
	I0408 19:31:39.497652  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.497661  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:39.497668  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:39.497722  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:39.534692  205913 cri.go:89] found id: ""
	I0408 19:31:39.534725  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.534737  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:39.534745  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:39.534817  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:39.570600  205913 cri.go:89] found id: ""
	I0408 19:31:39.570634  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.570641  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:39.570648  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:39.570784  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:39.610208  205913 cri.go:89] found id: ""
	I0408 19:31:39.610238  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.610249  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:39.610259  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:39.610317  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:39.650613  205913 cri.go:89] found id: ""
	I0408 19:31:39.650647  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.650660  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:39.650669  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:39.650741  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:39.687611  205913 cri.go:89] found id: ""
	I0408 19:31:39.687650  205913 logs.go:282] 0 containers: []
	W0408 19:31:39.687664  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:39.687680  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:39.687696  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:39.749911  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:39.749957  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:39.764318  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:39.764354  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:39.849562  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:39.849596  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:39.849611  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:39.934224  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:39.934268  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:42.477978  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:42.491374  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:42.491451  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:42.527034  205913 cri.go:89] found id: ""
	I0408 19:31:42.527070  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.527083  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:42.527091  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:42.527158  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:42.564633  205913 cri.go:89] found id: ""
	I0408 19:31:42.564671  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.564682  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:42.564692  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:42.564769  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:42.599219  205913 cri.go:89] found id: ""
	I0408 19:31:42.599260  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.599271  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:42.599282  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:42.599365  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:42.639479  205913 cri.go:89] found id: ""
	I0408 19:31:42.639508  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.639518  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:42.639526  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:42.639579  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:42.673559  205913 cri.go:89] found id: ""
	I0408 19:31:42.673596  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.673609  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:42.673617  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:42.673690  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:42.709367  205913 cri.go:89] found id: ""
	I0408 19:31:42.709405  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.709417  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:42.709426  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:42.709484  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:42.747231  205913 cri.go:89] found id: ""
	I0408 19:31:42.747265  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.747277  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:42.747285  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:42.747354  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:42.781143  205913 cri.go:89] found id: ""
	I0408 19:31:42.781198  205913 logs.go:282] 0 containers: []
	W0408 19:31:42.781210  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:42.781223  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:42.781238  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:42.819784  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:42.819813  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:42.871558  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:42.871601  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:42.886608  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:42.886661  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:42.962681  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:42.962711  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:42.962726  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:45.547108  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:45.561121  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:45.561194  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:45.597533  205913 cri.go:89] found id: ""
	I0408 19:31:45.597563  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.597574  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:45.597581  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:45.597641  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:45.633981  205913 cri.go:89] found id: ""
	I0408 19:31:45.634013  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.634025  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:45.634034  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:45.634099  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:45.669388  205913 cri.go:89] found id: ""
	I0408 19:31:45.669420  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.669432  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:45.669442  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:45.669511  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:45.704172  205913 cri.go:89] found id: ""
	I0408 19:31:45.704200  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.704208  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:45.704215  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:45.704272  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:45.743085  205913 cri.go:89] found id: ""
	I0408 19:31:45.743114  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.743122  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:45.743128  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:45.743200  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:45.781216  205913 cri.go:89] found id: ""
	I0408 19:31:45.781253  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.781264  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:45.781272  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:45.781341  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:45.816019  205913 cri.go:89] found id: ""
	I0408 19:31:45.816052  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.816064  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:45.816072  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:45.816150  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:45.859060  205913 cri.go:89] found id: ""
	I0408 19:31:45.859094  205913 logs.go:282] 0 containers: []
	W0408 19:31:45.859105  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:45.859119  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:45.859134  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:45.912534  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:45.912579  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:45.929726  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:45.929764  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:46.018218  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:46.018241  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:46.018257  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:46.099159  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:46.099203  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:48.640579  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:48.653825  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:48.653916  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:48.691089  205913 cri.go:89] found id: ""
	I0408 19:31:48.691116  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.691125  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:48.691131  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:48.691194  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:48.726342  205913 cri.go:89] found id: ""
	I0408 19:31:48.726397  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.726411  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:48.726419  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:48.726486  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:48.764507  205913 cri.go:89] found id: ""
	I0408 19:31:48.764536  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.764545  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:48.764552  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:48.764616  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:48.802636  205913 cri.go:89] found id: ""
	I0408 19:31:48.802673  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.802686  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:48.802694  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:48.802761  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:48.840913  205913 cri.go:89] found id: ""
	I0408 19:31:48.840946  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.840958  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:48.840966  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:48.841030  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:48.884665  205913 cri.go:89] found id: ""
	I0408 19:31:48.884692  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.884702  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:48.884711  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:48.884781  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:48.925677  205913 cri.go:89] found id: ""
	I0408 19:31:48.925704  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.925713  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:48.925719  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:48.925777  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:48.961403  205913 cri.go:89] found id: ""
	I0408 19:31:48.961431  205913 logs.go:282] 0 containers: []
	W0408 19:31:48.961440  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:48.961450  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:48.961462  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:49.045791  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:49.045852  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:49.091023  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:49.091056  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:49.143707  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:49.143755  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:49.158076  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:49.158112  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:49.234338  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:51.735262  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:51.747991  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:51.748076  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:51.782906  205913 cri.go:89] found id: ""
	I0408 19:31:51.782942  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.782953  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:51.782962  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:51.783034  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:51.817547  205913 cri.go:89] found id: ""
	I0408 19:31:51.817576  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.817584  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:51.817590  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:51.817651  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:51.852961  205913 cri.go:89] found id: ""
	I0408 19:31:51.853006  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.853014  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:51.853021  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:51.853077  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:51.888343  205913 cri.go:89] found id: ""
	I0408 19:31:51.888370  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.888378  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:51.888384  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:51.888448  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:51.923409  205913 cri.go:89] found id: ""
	I0408 19:31:51.923437  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.923446  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:51.923452  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:51.923506  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:51.959225  205913 cri.go:89] found id: ""
	I0408 19:31:51.959256  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.959268  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:51.959276  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:51.959340  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:51.994887  205913 cri.go:89] found id: ""
	I0408 19:31:51.994920  205913 logs.go:282] 0 containers: []
	W0408 19:31:51.994928  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:51.994935  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:51.994998  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:52.028527  205913 cri.go:89] found id: ""
	I0408 19:31:52.028551  205913 logs.go:282] 0 containers: []
	W0408 19:31:52.028562  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:52.028572  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:52.028583  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:52.079074  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:52.079124  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:52.094093  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:52.094126  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:52.165857  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:52.165884  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:52.165909  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:52.251411  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:52.251456  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:54.793085  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:54.806866  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:54.806935  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:54.841977  205913 cri.go:89] found id: ""
	I0408 19:31:54.842003  205913 logs.go:282] 0 containers: []
	W0408 19:31:54.842011  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:54.842070  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:54.842129  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:54.878244  205913 cri.go:89] found id: ""
	I0408 19:31:54.878273  205913 logs.go:282] 0 containers: []
	W0408 19:31:54.878281  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:54.878287  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:54.878355  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:54.912754  205913 cri.go:89] found id: ""
	I0408 19:31:54.912793  205913 logs.go:282] 0 containers: []
	W0408 19:31:54.912804  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:54.912812  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:54.912887  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:54.950777  205913 cri.go:89] found id: ""
	I0408 19:31:54.950809  205913 logs.go:282] 0 containers: []
	W0408 19:31:54.950819  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:54.950827  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:54.950893  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:54.987889  205913 cri.go:89] found id: ""
	I0408 19:31:54.987917  205913 logs.go:282] 0 containers: []
	W0408 19:31:54.987927  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:54.987937  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:54.988001  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:55.022917  205913 cri.go:89] found id: ""
	I0408 19:31:55.022948  205913 logs.go:282] 0 containers: []
	W0408 19:31:55.022958  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:55.022973  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:55.023044  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:55.060178  205913 cri.go:89] found id: ""
	I0408 19:31:55.060205  205913 logs.go:282] 0 containers: []
	W0408 19:31:55.060218  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:55.060226  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:55.060295  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:55.093384  205913 cri.go:89] found id: ""
	I0408 19:31:55.093443  205913 logs.go:282] 0 containers: []
	W0408 19:31:55.093458  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:55.093471  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:55.093492  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:31:55.136754  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:55.136789  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:55.191071  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:55.191122  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:55.206772  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:55.206824  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:55.277595  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:55.277628  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:55.277660  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:57.859316  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:31:57.872870  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:31:57.872947  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:31:57.911599  205913 cri.go:89] found id: ""
	I0408 19:31:57.911627  205913 logs.go:282] 0 containers: []
	W0408 19:31:57.911637  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:31:57.911644  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:31:57.911718  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:31:57.948188  205913 cri.go:89] found id: ""
	I0408 19:31:57.948218  205913 logs.go:282] 0 containers: []
	W0408 19:31:57.948229  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:31:57.948237  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:31:57.948307  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:31:57.983220  205913 cri.go:89] found id: ""
	I0408 19:31:57.983254  205913 logs.go:282] 0 containers: []
	W0408 19:31:57.983265  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:31:57.983276  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:31:57.983341  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:31:58.018102  205913 cri.go:89] found id: ""
	I0408 19:31:58.018133  205913 logs.go:282] 0 containers: []
	W0408 19:31:58.018143  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:31:58.018151  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:31:58.018221  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:31:58.059883  205913 cri.go:89] found id: ""
	I0408 19:31:58.059917  205913 logs.go:282] 0 containers: []
	W0408 19:31:58.059925  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:31:58.059932  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:31:58.059998  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:31:58.110007  205913 cri.go:89] found id: ""
	I0408 19:31:58.110044  205913 logs.go:282] 0 containers: []
	W0408 19:31:58.110053  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:31:58.110061  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:31:58.110132  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:31:58.163779  205913 cri.go:89] found id: ""
	I0408 19:31:58.163816  205913 logs.go:282] 0 containers: []
	W0408 19:31:58.163824  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:31:58.163830  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:31:58.163884  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:31:58.202357  205913 cri.go:89] found id: ""
	I0408 19:31:58.202391  205913 logs.go:282] 0 containers: []
	W0408 19:31:58.202402  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:31:58.202414  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:31:58.202430  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:31:58.255686  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:31:58.255733  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:31:58.269861  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:31:58.269902  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:31:58.345933  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:31:58.345979  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:31:58.346000  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:31:58.423427  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:31:58.423474  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:00.967092  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:00.981385  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:00.981469  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:01.017727  205913 cri.go:89] found id: ""
	I0408 19:32:01.017769  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.017781  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:01.017798  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:01.017908  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:01.057163  205913 cri.go:89] found id: ""
	I0408 19:32:01.057196  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.057204  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:01.057210  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:01.057276  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:01.093057  205913 cri.go:89] found id: ""
	I0408 19:32:01.093088  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.093100  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:01.093109  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:01.093173  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:01.131547  205913 cri.go:89] found id: ""
	I0408 19:32:01.131575  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.131586  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:01.131593  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:01.131668  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:01.165504  205913 cri.go:89] found id: ""
	I0408 19:32:01.165534  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.165543  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:01.165550  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:01.165604  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:01.199546  205913 cri.go:89] found id: ""
	I0408 19:32:01.199582  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.199592  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:01.199598  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:01.199663  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:01.236201  205913 cri.go:89] found id: ""
	I0408 19:32:01.236238  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.236256  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:01.236265  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:01.236332  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:01.272693  205913 cri.go:89] found id: ""
	I0408 19:32:01.272722  205913 logs.go:282] 0 containers: []
	W0408 19:32:01.272730  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:01.272740  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:01.272752  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:01.351847  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:01.351903  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:01.389845  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:01.389882  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:01.441550  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:01.441595  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:01.456461  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:01.456509  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:01.534273  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:04.034591  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:04.050603  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:04.050674  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:04.089384  205913 cri.go:89] found id: ""
	I0408 19:32:04.089412  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.089422  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:04.089429  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:04.089503  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:04.127527  205913 cri.go:89] found id: ""
	I0408 19:32:04.127558  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.127569  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:04.127577  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:04.127642  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:04.164038  205913 cri.go:89] found id: ""
	I0408 19:32:04.164069  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.164079  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:04.164087  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:04.164162  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:04.203667  205913 cri.go:89] found id: ""
	I0408 19:32:04.203693  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.203701  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:04.203707  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:04.203759  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:04.241949  205913 cri.go:89] found id: ""
	I0408 19:32:04.241977  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.241987  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:04.241993  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:04.242071  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:04.279376  205913 cri.go:89] found id: ""
	I0408 19:32:04.279410  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.279422  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:04.279431  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:04.279499  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:04.320620  205913 cri.go:89] found id: ""
	I0408 19:32:04.320655  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.320668  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:04.320677  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:04.320745  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:04.362554  205913 cri.go:89] found id: ""
	I0408 19:32:04.362590  205913 logs.go:282] 0 containers: []
	W0408 19:32:04.362604  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:04.362617  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:04.362638  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:04.420036  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:04.420081  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:04.434741  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:04.434770  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:04.509688  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:04.509716  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:04.509732  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:04.605374  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:04.605418  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:07.152545  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:07.165878  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:07.165980  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:07.198305  205913 cri.go:89] found id: ""
	I0408 19:32:07.198332  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.198343  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:07.198352  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:07.198423  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:07.234817  205913 cri.go:89] found id: ""
	I0408 19:32:07.234843  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.234851  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:07.234857  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:07.234934  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:07.271800  205913 cri.go:89] found id: ""
	I0408 19:32:07.271832  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.271841  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:07.271848  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:07.271902  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:07.313905  205913 cri.go:89] found id: ""
	I0408 19:32:07.313940  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.313948  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:07.313973  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:07.314042  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:07.352475  205913 cri.go:89] found id: ""
	I0408 19:32:07.352505  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.352518  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:07.352525  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:07.352575  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:07.387837  205913 cri.go:89] found id: ""
	I0408 19:32:07.387911  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.387955  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:07.387964  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:07.388024  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:07.422507  205913 cri.go:89] found id: ""
	I0408 19:32:07.422548  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.422560  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:07.422569  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:07.422640  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:07.458845  205913 cri.go:89] found id: ""
	I0408 19:32:07.458871  205913 logs.go:282] 0 containers: []
	W0408 19:32:07.458880  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:07.458890  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:07.458905  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:07.498222  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:07.498263  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:07.556584  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:07.556626  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:07.572345  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:07.572386  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:07.662586  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:07.662611  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:07.662624  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:10.267801  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:10.282647  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:10.282832  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:10.319887  205913 cri.go:89] found id: ""
	I0408 19:32:10.319915  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.319923  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:10.319929  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:10.319984  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:10.354815  205913 cri.go:89] found id: ""
	I0408 19:32:10.354845  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.354857  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:10.354865  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:10.354934  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:10.391035  205913 cri.go:89] found id: ""
	I0408 19:32:10.391062  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.391074  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:10.391084  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:10.391151  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:10.426724  205913 cri.go:89] found id: ""
	I0408 19:32:10.426751  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.426762  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:10.426772  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:10.426846  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:10.463755  205913 cri.go:89] found id: ""
	I0408 19:32:10.463790  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.463800  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:10.463807  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:10.463873  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:10.505205  205913 cri.go:89] found id: ""
	I0408 19:32:10.505237  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.505245  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:10.505252  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:10.505307  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:10.542315  205913 cri.go:89] found id: ""
	I0408 19:32:10.542353  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.542366  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:10.542374  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:10.542448  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:10.583813  205913 cri.go:89] found id: ""
	I0408 19:32:10.583850  205913 logs.go:282] 0 containers: []
	W0408 19:32:10.583862  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:10.583873  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:10.583887  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:10.665997  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:10.666021  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:10.666037  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:10.753812  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:10.753878  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:10.793759  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:10.793794  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:10.843608  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:10.843650  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:13.356859  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:13.371801  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:13.371896  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:13.410092  205913 cri.go:89] found id: ""
	I0408 19:32:13.410122  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.410135  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:13.410144  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:13.410210  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:13.452976  205913 cri.go:89] found id: ""
	I0408 19:32:13.453015  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.453028  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:13.453037  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:13.453106  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:13.493191  205913 cri.go:89] found id: ""
	I0408 19:32:13.493229  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.493241  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:13.493264  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:13.493350  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:13.536361  205913 cri.go:89] found id: ""
	I0408 19:32:13.536395  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.536403  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:13.536411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:13.536474  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:13.572477  205913 cri.go:89] found id: ""
	I0408 19:32:13.572511  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.572523  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:13.572532  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:13.572601  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:13.611484  205913 cri.go:89] found id: ""
	I0408 19:32:13.611511  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.611535  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:13.611544  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:13.611625  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:13.650770  205913 cri.go:89] found id: ""
	I0408 19:32:13.650797  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.650806  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:13.650811  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:13.650863  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:13.685601  205913 cri.go:89] found id: ""
	I0408 19:32:13.685634  205913 logs.go:282] 0 containers: []
	W0408 19:32:13.685650  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:13.685661  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:13.685673  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:13.724224  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:13.724253  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:13.773653  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:13.773695  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:13.787531  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:13.787576  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:13.863293  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:13.863314  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:13.863328  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:16.438567  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:16.451493  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:16.451558  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:16.486929  205913 cri.go:89] found id: ""
	I0408 19:32:16.486964  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.486976  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:16.486986  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:16.487054  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:16.527005  205913 cri.go:89] found id: ""
	I0408 19:32:16.527045  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.527053  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:16.527060  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:16.527126  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:16.563385  205913 cri.go:89] found id: ""
	I0408 19:32:16.563417  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.563429  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:16.563437  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:16.563501  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:16.599032  205913 cri.go:89] found id: ""
	I0408 19:32:16.599076  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.599114  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:16.599140  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:16.599242  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:16.638971  205913 cri.go:89] found id: ""
	I0408 19:32:16.639011  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.639024  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:16.639034  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:16.639193  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:16.678685  205913 cri.go:89] found id: ""
	I0408 19:32:16.678712  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.678721  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:16.678728  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:16.678797  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:16.716193  205913 cri.go:89] found id: ""
	I0408 19:32:16.716218  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.716235  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:16.716241  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:16.716302  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:16.752015  205913 cri.go:89] found id: ""
	I0408 19:32:16.752042  205913 logs.go:282] 0 containers: []
	W0408 19:32:16.752055  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:16.752071  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:16.752082  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:16.830752  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:16.830797  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:16.871694  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:16.871737  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:16.920855  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:16.920897  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:16.935863  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:16.935910  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:17.005233  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:19.506016  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:19.519236  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:19.519322  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:19.554891  205913 cri.go:89] found id: ""
	I0408 19:32:19.554924  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.554935  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:19.554944  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:19.555013  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:19.589948  205913 cri.go:89] found id: ""
	I0408 19:32:19.589990  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.590002  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:19.590010  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:19.590082  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:19.625676  205913 cri.go:89] found id: ""
	I0408 19:32:19.625706  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.625715  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:19.625721  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:19.625779  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:19.666880  205913 cri.go:89] found id: ""
	I0408 19:32:19.666915  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.666927  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:19.666935  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:19.667026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:19.702258  205913 cri.go:89] found id: ""
	I0408 19:32:19.702289  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.702301  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:19.702309  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:19.702373  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:19.741590  205913 cri.go:89] found id: ""
	I0408 19:32:19.741626  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.741637  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:19.741647  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:19.741846  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:19.778608  205913 cri.go:89] found id: ""
	I0408 19:32:19.778643  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.778654  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:19.778661  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:19.778729  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:19.816107  205913 cri.go:89] found id: ""
	I0408 19:32:19.816139  205913 logs.go:282] 0 containers: []
	W0408 19:32:19.816160  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:19.816174  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:19.816191  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:19.855250  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:19.855278  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:19.907619  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:19.907665  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:19.923637  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:19.923671  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:19.993468  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:19.993492  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:19.993508  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:22.580966  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:22.594725  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:22.594908  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:22.634896  205913 cri.go:89] found id: ""
	I0408 19:32:22.634933  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.634944  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:22.634953  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:22.635028  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:22.673751  205913 cri.go:89] found id: ""
	I0408 19:32:22.673779  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.673791  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:22.673800  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:22.673886  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:22.708353  205913 cri.go:89] found id: ""
	I0408 19:32:22.708381  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.708390  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:22.708397  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:22.708449  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:22.743503  205913 cri.go:89] found id: ""
	I0408 19:32:22.743534  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.743542  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:22.743550  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:22.743619  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:22.777465  205913 cri.go:89] found id: ""
	I0408 19:32:22.777494  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.777506  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:22.777515  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:22.777586  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:22.811844  205913 cri.go:89] found id: ""
	I0408 19:32:22.811874  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.811884  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:22.811890  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:22.811956  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:22.848582  205913 cri.go:89] found id: ""
	I0408 19:32:22.848611  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.848620  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:22.848626  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:22.848682  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:22.882051  205913 cri.go:89] found id: ""
	I0408 19:32:22.882081  205913 logs.go:282] 0 containers: []
	W0408 19:32:22.882093  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:22.882106  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:22.882120  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:22.964537  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:22.964585  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:23.006426  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:23.006460  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:23.060102  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:23.060151  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:23.074695  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:23.074740  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:23.142867  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:25.643309  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:25.656870  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:25.656949  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:25.691542  205913 cri.go:89] found id: ""
	I0408 19:32:25.691571  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.691580  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:25.691587  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:25.691638  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:25.729759  205913 cri.go:89] found id: ""
	I0408 19:32:25.729788  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.729796  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:25.729803  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:25.729890  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:25.765701  205913 cri.go:89] found id: ""
	I0408 19:32:25.765727  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.765741  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:25.765746  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:25.765810  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:25.804381  205913 cri.go:89] found id: ""
	I0408 19:32:25.804414  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.804426  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:25.804434  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:25.804502  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:25.841429  205913 cri.go:89] found id: ""
	I0408 19:32:25.841461  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.841473  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:25.841482  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:25.841543  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:25.877044  205913 cri.go:89] found id: ""
	I0408 19:32:25.877074  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.877086  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:25.877094  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:25.877160  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:25.923425  205913 cri.go:89] found id: ""
	I0408 19:32:25.923456  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.923466  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:25.923472  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:25.923534  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:25.958259  205913 cri.go:89] found id: ""
	I0408 19:32:25.958284  205913 logs.go:282] 0 containers: []
	W0408 19:32:25.958293  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:25.958309  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:25.958323  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:26.013724  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:26.013770  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:26.031813  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:26.031860  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:26.107290  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:26.107312  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:26.107330  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:26.189667  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:26.189716  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:28.734023  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:28.747901  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:28.747997  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:28.783790  205913 cri.go:89] found id: ""
	I0408 19:32:28.783822  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.783832  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:28.783841  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:28.783905  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:28.832594  205913 cri.go:89] found id: ""
	I0408 19:32:28.832624  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.832637  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:28.832648  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:28.832715  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:28.873869  205913 cri.go:89] found id: ""
	I0408 19:32:28.873902  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.873913  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:28.873921  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:28.873981  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:28.912832  205913 cri.go:89] found id: ""
	I0408 19:32:28.912859  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.912870  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:28.912878  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:28.912929  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:28.951314  205913 cri.go:89] found id: ""
	I0408 19:32:28.951346  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.951359  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:28.951367  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:28.951427  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:28.990482  205913 cri.go:89] found id: ""
	I0408 19:32:28.990517  205913 logs.go:282] 0 containers: []
	W0408 19:32:28.990527  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:28.990540  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:28.990603  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:29.028406  205913 cri.go:89] found id: ""
	I0408 19:32:29.028440  205913 logs.go:282] 0 containers: []
	W0408 19:32:29.028451  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:29.028459  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:29.028522  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:29.070211  205913 cri.go:89] found id: ""
	I0408 19:32:29.070238  205913 logs.go:282] 0 containers: []
	W0408 19:32:29.070256  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:29.070268  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:29.070288  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:29.144858  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:29.144885  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:29.144899  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:29.227591  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:29.227641  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:29.271220  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:29.271247  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:29.323111  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:29.323160  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:31.838328  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:31.852320  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:31.852385  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:31.896411  205913 cri.go:89] found id: ""
	I0408 19:32:31.896448  205913 logs.go:282] 0 containers: []
	W0408 19:32:31.896461  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:31.896472  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:31.896533  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:31.948289  205913 cri.go:89] found id: ""
	I0408 19:32:31.948324  205913 logs.go:282] 0 containers: []
	W0408 19:32:31.948338  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:31.948346  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:31.948404  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:31.986125  205913 cri.go:89] found id: ""
	I0408 19:32:31.986154  205913 logs.go:282] 0 containers: []
	W0408 19:32:31.986167  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:31.986181  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:31.986246  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:32.040627  205913 cri.go:89] found id: ""
	I0408 19:32:32.040663  205913 logs.go:282] 0 containers: []
	W0408 19:32:32.040675  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:32.040683  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:32.040743  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:32.086944  205913 cri.go:89] found id: ""
	I0408 19:32:32.086976  205913 logs.go:282] 0 containers: []
	W0408 19:32:32.086987  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:32.086995  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:32.087073  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:32.122033  205913 cri.go:89] found id: ""
	I0408 19:32:32.122062  205913 logs.go:282] 0 containers: []
	W0408 19:32:32.122074  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:32.122083  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:32.122147  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:32.156749  205913 cri.go:89] found id: ""
	I0408 19:32:32.156779  205913 logs.go:282] 0 containers: []
	W0408 19:32:32.156790  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:32.156798  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:32.156871  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:32.201154  205913 cri.go:89] found id: ""
	I0408 19:32:32.201180  205913 logs.go:282] 0 containers: []
	W0408 19:32:32.201191  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:32.201203  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:32.201218  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:32.250457  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:32.250500  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:32.267239  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:32.267271  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:32.342876  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:32.342906  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:32.342922  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:32.426864  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:32.426911  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:34.973992  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:34.988126  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:34.988213  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:35.024342  205913 cri.go:89] found id: ""
	I0408 19:32:35.024371  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.024382  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:35.024391  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:35.024451  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:35.059505  205913 cri.go:89] found id: ""
	I0408 19:32:35.059546  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.059559  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:35.059567  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:35.059627  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:35.096442  205913 cri.go:89] found id: ""
	I0408 19:32:35.096470  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.096478  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:35.096485  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:35.096538  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:35.130503  205913 cri.go:89] found id: ""
	I0408 19:32:35.130530  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.130539  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:35.130545  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:35.130596  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:35.166847  205913 cri.go:89] found id: ""
	I0408 19:32:35.166882  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.166894  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:35.166903  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:35.166985  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:35.203587  205913 cri.go:89] found id: ""
	I0408 19:32:35.203620  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.203631  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:35.203639  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:35.203701  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:35.239438  205913 cri.go:89] found id: ""
	I0408 19:32:35.239465  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.239474  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:35.239480  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:35.239537  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:35.274426  205913 cri.go:89] found id: ""
	I0408 19:32:35.274459  205913 logs.go:282] 0 containers: []
	W0408 19:32:35.274470  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:35.274482  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:35.274496  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:35.316577  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:35.316609  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:35.368098  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:35.368138  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:35.382549  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:35.382579  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:35.459800  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:35.459822  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:35.459837  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:38.039134  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:38.057120  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:38.057196  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:38.103500  205913 cri.go:89] found id: ""
	I0408 19:32:38.103531  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.103544  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:38.103554  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:38.103620  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:38.146307  205913 cri.go:89] found id: ""
	I0408 19:32:38.146342  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.146355  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:38.146363  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:38.146424  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:38.191307  205913 cri.go:89] found id: ""
	I0408 19:32:38.191336  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.191345  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:38.191352  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:38.191420  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:38.236542  205913 cri.go:89] found id: ""
	I0408 19:32:38.236574  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.236585  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:38.236593  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:38.236669  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:38.282786  205913 cri.go:89] found id: ""
	I0408 19:32:38.282815  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.282826  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:38.282836  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:38.282901  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:38.331602  205913 cri.go:89] found id: ""
	I0408 19:32:38.331635  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.331646  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:38.331656  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:38.331720  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:38.387877  205913 cri.go:89] found id: ""
	I0408 19:32:38.387916  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.387928  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:38.387937  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:38.388010  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:38.439794  205913 cri.go:89] found id: ""
	I0408 19:32:38.439831  205913 logs.go:282] 0 containers: []
	W0408 19:32:38.439843  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:38.439857  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:38.439875  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:38.555640  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:38.555687  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:38.607306  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:38.607348  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:38.685465  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:38.685509  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:38.704804  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:38.704859  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:38.796095  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:41.296378  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:41.310116  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:41.310206  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:41.346459  205913 cri.go:89] found id: ""
	I0408 19:32:41.346492  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.346505  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:41.346513  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:41.346581  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:41.384270  205913 cri.go:89] found id: ""
	I0408 19:32:41.384305  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.384317  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:41.384326  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:41.384394  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:41.426345  205913 cri.go:89] found id: ""
	I0408 19:32:41.426375  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.426387  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:41.426395  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:41.426463  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:41.466724  205913 cri.go:89] found id: ""
	I0408 19:32:41.466758  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.466769  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:41.466778  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:41.466844  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:41.500847  205913 cri.go:89] found id: ""
	I0408 19:32:41.500882  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.500893  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:41.500913  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:41.500993  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:41.540565  205913 cri.go:89] found id: ""
	I0408 19:32:41.540595  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.540604  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:41.540611  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:41.540670  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:41.576891  205913 cri.go:89] found id: ""
	I0408 19:32:41.576943  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.576955  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:41.576964  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:41.577043  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:41.620346  205913 cri.go:89] found id: ""
	I0408 19:32:41.620380  205913 logs.go:282] 0 containers: []
	W0408 19:32:41.620392  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:41.620405  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:41.620420  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:41.662071  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:41.662111  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:41.719663  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:41.719721  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:41.735811  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:41.735844  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:41.826562  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:41.826594  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:41.826611  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:44.421975  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:44.440778  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:44.440861  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:44.479093  205913 cri.go:89] found id: ""
	I0408 19:32:44.479127  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.479140  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:44.479156  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:44.479231  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:44.523221  205913 cri.go:89] found id: ""
	I0408 19:32:44.523252  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.523262  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:44.523270  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:44.523335  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:44.567801  205913 cri.go:89] found id: ""
	I0408 19:32:44.567826  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.567834  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:44.567841  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:44.567892  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:44.608616  205913 cri.go:89] found id: ""
	I0408 19:32:44.608647  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.608655  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:44.608661  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:44.608732  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:44.646614  205913 cri.go:89] found id: ""
	I0408 19:32:44.646652  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.646667  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:44.646677  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:44.646747  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:44.690362  205913 cri.go:89] found id: ""
	I0408 19:32:44.690392  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.690401  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:44.690409  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:44.690478  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:44.730545  205913 cri.go:89] found id: ""
	I0408 19:32:44.730585  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.730594  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:44.730600  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:44.730665  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:44.766286  205913 cri.go:89] found id: ""
	I0408 19:32:44.766326  205913 logs.go:282] 0 containers: []
	W0408 19:32:44.766334  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:44.766344  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:44.766355  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:44.835640  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:44.835666  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:44.835679  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:44.921821  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:44.921881  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:44.966183  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:44.966220  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:45.023979  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:45.024028  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:47.540276  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:47.558494  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:47.558579  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:47.605680  205913 cri.go:89] found id: ""
	I0408 19:32:47.605719  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.605731  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:47.605741  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:47.605811  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:47.645193  205913 cri.go:89] found id: ""
	I0408 19:32:47.645236  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.645245  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:47.645251  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:47.645301  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:47.691014  205913 cri.go:89] found id: ""
	I0408 19:32:47.691043  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.691053  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:47.691060  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:47.691137  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:47.736847  205913 cri.go:89] found id: ""
	I0408 19:32:47.736877  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.736889  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:47.736897  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:47.736950  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:47.775480  205913 cri.go:89] found id: ""
	I0408 19:32:47.775510  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.775523  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:47.775531  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:47.775597  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:47.815645  205913 cri.go:89] found id: ""
	I0408 19:32:47.815685  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.815698  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:47.815706  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:47.815854  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:47.861433  205913 cri.go:89] found id: ""
	I0408 19:32:47.861468  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.861480  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:47.861489  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:47.861559  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:47.899968  205913 cri.go:89] found id: ""
	I0408 19:32:47.900001  205913 logs.go:282] 0 containers: []
	W0408 19:32:47.900013  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:47.900025  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:47.900037  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:47.952190  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:47.952236  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:47.966565  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:47.966606  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:48.042762  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:48.042791  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:48.042807  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:48.130509  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:48.130565  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:50.679330  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:50.698111  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:50.698213  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:50.750318  205913 cri.go:89] found id: ""
	I0408 19:32:50.750347  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.750356  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:50.750370  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:50.750425  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:50.797748  205913 cri.go:89] found id: ""
	I0408 19:32:50.797776  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.797787  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:50.797796  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:50.797883  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:50.848630  205913 cri.go:89] found id: ""
	I0408 19:32:50.848659  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.848668  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:50.848674  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:50.848742  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:50.893283  205913 cri.go:89] found id: ""
	I0408 19:32:50.893317  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.893330  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:50.893338  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:50.893404  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:50.934057  205913 cri.go:89] found id: ""
	I0408 19:32:50.934091  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.934102  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:50.934111  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:50.934175  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:50.975278  205913 cri.go:89] found id: ""
	I0408 19:32:50.975317  205913 logs.go:282] 0 containers: []
	W0408 19:32:50.975329  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:50.975337  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:50.975411  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:51.016371  205913 cri.go:89] found id: ""
	I0408 19:32:51.016402  205913 logs.go:282] 0 containers: []
	W0408 19:32:51.016414  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:51.016424  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:51.016490  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:51.052915  205913 cri.go:89] found id: ""
	I0408 19:32:51.052947  205913 logs.go:282] 0 containers: []
	W0408 19:32:51.052957  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:51.052968  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:51.052980  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:51.107876  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:51.107919  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:51.125417  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:51.125471  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:51.196547  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:51.196575  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:51.196592  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:51.287185  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:51.287225  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:53.838039  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:53.851409  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:53.851483  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:53.894586  205913 cri.go:89] found id: ""
	I0408 19:32:53.894623  205913 logs.go:282] 0 containers: []
	W0408 19:32:53.894636  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:53.894646  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:53.894718  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:53.934801  205913 cri.go:89] found id: ""
	I0408 19:32:53.934830  205913 logs.go:282] 0 containers: []
	W0408 19:32:53.934838  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:53.934845  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:53.934955  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:53.975178  205913 cri.go:89] found id: ""
	I0408 19:32:53.975215  205913 logs.go:282] 0 containers: []
	W0408 19:32:53.975235  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:53.975243  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:53.975316  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:54.016091  205913 cri.go:89] found id: ""
	I0408 19:32:54.016122  205913 logs.go:282] 0 containers: []
	W0408 19:32:54.016133  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:54.016141  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:54.016208  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:54.059146  205913 cri.go:89] found id: ""
	I0408 19:32:54.059175  205913 logs.go:282] 0 containers: []
	W0408 19:32:54.059188  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:54.059196  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:54.059269  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:54.105224  205913 cri.go:89] found id: ""
	I0408 19:32:54.105263  205913 logs.go:282] 0 containers: []
	W0408 19:32:54.105274  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:54.105283  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:54.105349  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:54.139248  205913 cri.go:89] found id: ""
	I0408 19:32:54.139279  205913 logs.go:282] 0 containers: []
	W0408 19:32:54.139287  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:54.139293  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:54.139343  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:54.176560  205913 cri.go:89] found id: ""
	I0408 19:32:54.176585  205913 logs.go:282] 0 containers: []
	W0408 19:32:54.176593  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:54.176603  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:54.176615  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:54.249563  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:54.249587  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:54.249599  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:54.348900  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:54.348949  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:32:54.403789  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:54.403827  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:54.468740  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:54.468784  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:56.983489  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:32:57.001725  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:32:57.001800  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:32:57.048848  205913 cri.go:89] found id: ""
	I0408 19:32:57.048885  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.048943  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:32:57.048958  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:32:57.049034  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:32:57.087744  205913 cri.go:89] found id: ""
	I0408 19:32:57.087772  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.087779  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:32:57.087786  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:32:57.087834  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:32:57.125861  205913 cri.go:89] found id: ""
	I0408 19:32:57.125897  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.125907  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:32:57.125913  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:32:57.125965  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:32:57.164134  205913 cri.go:89] found id: ""
	I0408 19:32:57.164167  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.164179  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:32:57.164187  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:32:57.164256  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:32:57.205681  205913 cri.go:89] found id: ""
	I0408 19:32:57.205719  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.205730  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:32:57.205739  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:32:57.205812  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:32:57.248016  205913 cri.go:89] found id: ""
	I0408 19:32:57.248057  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.248072  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:32:57.248082  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:32:57.248145  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:32:57.286838  205913 cri.go:89] found id: ""
	I0408 19:32:57.286871  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.286884  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:32:57.286892  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:32:57.286962  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:32:57.326875  205913 cri.go:89] found id: ""
	I0408 19:32:57.326903  205913 logs.go:282] 0 containers: []
	W0408 19:32:57.326914  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:32:57.326928  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:32:57.326951  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:32:57.384328  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:32:57.384387  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:32:57.403830  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:32:57.403877  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:32:57.479019  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:32:57.479044  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:32:57.479061  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:32:57.593419  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:32:57.593472  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:00.141945  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:00.159721  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:00.159806  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:00.215394  205913 cri.go:89] found id: ""
	I0408 19:33:00.215421  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.215433  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:00.215442  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:00.215499  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:00.270195  205913 cri.go:89] found id: ""
	I0408 19:33:00.270228  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.270239  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:00.270247  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:00.270312  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:00.317221  205913 cri.go:89] found id: ""
	I0408 19:33:00.317263  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.317278  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:00.317287  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:00.317355  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:00.370558  205913 cri.go:89] found id: ""
	I0408 19:33:00.370591  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.370602  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:00.370611  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:00.370675  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:00.407347  205913 cri.go:89] found id: ""
	I0408 19:33:00.407387  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.407399  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:00.407411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:00.407484  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:00.448148  205913 cri.go:89] found id: ""
	I0408 19:33:00.448182  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.448193  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:00.448202  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:00.448266  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:00.488587  205913 cri.go:89] found id: ""
	I0408 19:33:00.488619  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.488632  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:00.488640  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:00.488700  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:00.547671  205913 cri.go:89] found id: ""
	I0408 19:33:00.547707  205913 logs.go:282] 0 containers: []
	W0408 19:33:00.547719  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:00.547733  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:00.547749  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:00.620720  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:00.620770  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:00.638000  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:00.638045  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:00.713506  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:00.713527  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:00.713540  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:00.808713  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:00.808761  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
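
	The cycle above repeats for every expected control-plane component: minikube asks crictl for any container (running or exited) whose name matches, each query comes back empty, and the gatherer then falls back to the kubelet, dmesg and CRI-O journals. A minimal Go sketch of the same probe follows (not minikube's own code; it assumes crictl is on PATH and passwordless sudo, which the SSH runner has here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same query the log issues over SSH: list all containers whose name matches.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%q -> %v\n", name, ids)
			}
		}
	}

	An empty result for every component, as in this run, means the runtime has nothing for kubeadm or kubelet to have started yet, which is why the same loop keeps repeating until the 4-minute restart budget is exhausted.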
	I0408 19:33:03.355755  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:03.373269  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:03.373350  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:03.414083  205913 cri.go:89] found id: ""
	I0408 19:33:03.414116  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.414128  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:03.414136  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:03.414218  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:03.456362  205913 cri.go:89] found id: ""
	I0408 19:33:03.456398  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.456411  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:03.456423  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:03.456489  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:03.498386  205913 cri.go:89] found id: ""
	I0408 19:33:03.498429  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.498441  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:03.498451  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:03.498521  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:03.552432  205913 cri.go:89] found id: ""
	I0408 19:33:03.552463  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.552475  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:03.552484  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:03.552565  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:03.605705  205913 cri.go:89] found id: ""
	I0408 19:33:03.605740  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.605752  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:03.605761  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:03.605866  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:03.661980  205913 cri.go:89] found id: ""
	I0408 19:33:03.662017  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.662025  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:03.662032  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:03.662083  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:03.709559  205913 cri.go:89] found id: ""
	I0408 19:33:03.709598  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.709609  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:03.709618  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:03.709686  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:03.769923  205913 cri.go:89] found id: ""
	I0408 19:33:03.769957  205913 logs.go:282] 0 containers: []
	W0408 19:33:03.769967  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:03.769980  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:03.769998  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:03.891681  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:03.891744  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:03.954467  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:03.954505  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:04.015427  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:04.015478  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:04.033001  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:04.033059  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:04.134674  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:06.635234  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:06.656381  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:06.656465  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:06.703225  205913 cri.go:89] found id: ""
	I0408 19:33:06.703255  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.703266  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:06.703275  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:06.703337  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:06.773018  205913 cri.go:89] found id: ""
	I0408 19:33:06.773050  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.773062  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:06.773070  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:06.773127  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:06.817267  205913 cri.go:89] found id: ""
	I0408 19:33:06.817293  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.817309  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:06.817318  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:06.817396  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:06.861142  205913 cri.go:89] found id: ""
	I0408 19:33:06.861171  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.861182  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:06.861191  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:06.861264  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:06.899793  205913 cri.go:89] found id: ""
	I0408 19:33:06.899822  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.899832  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:06.899840  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:06.899901  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:06.951555  205913 cri.go:89] found id: ""
	I0408 19:33:06.951579  205913 logs.go:282] 0 containers: []
	W0408 19:33:06.951593  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:06.951599  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:06.951652  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:07.021554  205913 cri.go:89] found id: ""
	I0408 19:33:07.021578  205913 logs.go:282] 0 containers: []
	W0408 19:33:07.021587  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:07.021593  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:07.021672  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:07.060550  205913 cri.go:89] found id: ""
	I0408 19:33:07.060580  205913 logs.go:282] 0 containers: []
	W0408 19:33:07.060591  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:07.060603  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:07.060617  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:07.083358  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:07.083387  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:07.164068  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:07.164099  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:07.164121  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:07.255266  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:07.255326  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:07.307171  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:07.307206  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:09.873393  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:09.886840  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:09.886919  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:09.920931  205913 cri.go:89] found id: ""
	I0408 19:33:09.920969  205913 logs.go:282] 0 containers: []
	W0408 19:33:09.920980  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:09.920989  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:09.921060  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:09.957775  205913 cri.go:89] found id: ""
	I0408 19:33:09.957806  205913 logs.go:282] 0 containers: []
	W0408 19:33:09.957818  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:09.957825  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:09.957944  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:09.992513  205913 cri.go:89] found id: ""
	I0408 19:33:09.992544  205913 logs.go:282] 0 containers: []
	W0408 19:33:09.992555  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:09.992563  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:09.992626  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:10.038582  205913 cri.go:89] found id: ""
	I0408 19:33:10.038615  205913 logs.go:282] 0 containers: []
	W0408 19:33:10.038626  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:10.038634  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:10.038715  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:10.076585  205913 cri.go:89] found id: ""
	I0408 19:33:10.076620  205913 logs.go:282] 0 containers: []
	W0408 19:33:10.076633  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:10.076641  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:10.076721  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:10.116179  205913 cri.go:89] found id: ""
	I0408 19:33:10.116215  205913 logs.go:282] 0 containers: []
	W0408 19:33:10.116229  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:10.116238  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:10.116297  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:10.156667  205913 cri.go:89] found id: ""
	I0408 19:33:10.156699  205913 logs.go:282] 0 containers: []
	W0408 19:33:10.156711  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:10.156719  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:10.156788  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:10.195662  205913 cri.go:89] found id: ""
	I0408 19:33:10.195697  205913 logs.go:282] 0 containers: []
	W0408 19:33:10.195708  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:10.195721  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:10.195737  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:10.210481  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:10.210521  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:10.282844  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:10.282868  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:10.282887  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:10.367343  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:10.367390  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:10.413851  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:10.413910  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:12.966783  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:12.981104  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:12.981181  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:13.018699  205913 cri.go:89] found id: ""
	I0408 19:33:13.018735  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.018747  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:13.018757  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:13.018830  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:13.062778  205913 cri.go:89] found id: ""
	I0408 19:33:13.062812  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.062824  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:13.062833  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:13.062908  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:13.111207  205913 cri.go:89] found id: ""
	I0408 19:33:13.111244  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.111255  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:13.111264  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:13.111341  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:13.152865  205913 cri.go:89] found id: ""
	I0408 19:33:13.152906  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.152919  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:13.152927  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:13.152992  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:13.201130  205913 cri.go:89] found id: ""
	I0408 19:33:13.201162  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.201173  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:13.201182  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:13.201261  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:13.247837  205913 cri.go:89] found id: ""
	I0408 19:33:13.247866  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.247874  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:13.247881  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:13.247941  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:13.285880  205913 cri.go:89] found id: ""
	I0408 19:33:13.285915  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.285928  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:13.285937  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:13.286012  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:13.325258  205913 cri.go:89] found id: ""
	I0408 19:33:13.325294  205913 logs.go:282] 0 containers: []
	W0408 19:33:13.325304  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:13.325318  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:13.325332  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:13.408323  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:13.408367  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:13.480622  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:13.480669  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:13.499321  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:13.499358  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:13.581869  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:13.581896  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:13.581914  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:16.187079  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:16.203430  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:16.203504  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:16.253214  205913 cri.go:89] found id: ""
	I0408 19:33:16.253244  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.253256  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:16.253265  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:16.253327  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:16.311307  205913 cri.go:89] found id: ""
	I0408 19:33:16.311335  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.311345  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:16.311353  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:16.311417  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:16.367833  205913 cri.go:89] found id: ""
	I0408 19:33:16.367867  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.367877  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:16.367886  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:16.367946  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:16.418688  205913 cri.go:89] found id: ""
	I0408 19:33:16.418715  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.418726  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:16.418735  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:16.418798  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:16.469288  205913 cri.go:89] found id: ""
	I0408 19:33:16.469319  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.469327  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:16.469334  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:16.469398  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:16.520961  205913 cri.go:89] found id: ""
	I0408 19:33:16.520992  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.521003  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:16.521012  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:16.521069  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:16.563513  205913 cri.go:89] found id: ""
	I0408 19:33:16.563544  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.563553  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:16.563559  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:16.563612  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:16.610216  205913 cri.go:89] found id: ""
	I0408 19:33:16.610259  205913 logs.go:282] 0 containers: []
	W0408 19:33:16.610272  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:16.610353  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:16.610399  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:16.627699  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:16.627741  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:16.719923  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:16.719948  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:16.719966  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:16.817253  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:16.817290  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:16.857476  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:16.857519  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:19.413999  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:19.427830  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:19.427917  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:19.462922  205913 cri.go:89] found id: ""
	I0408 19:33:19.462962  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.462975  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:19.462985  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:19.463042  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:19.508659  205913 cri.go:89] found id: ""
	I0408 19:33:19.508691  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.508703  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:19.508716  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:19.508782  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:19.544061  205913 cri.go:89] found id: ""
	I0408 19:33:19.544089  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.544099  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:19.544105  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:19.544167  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:19.581759  205913 cri.go:89] found id: ""
	I0408 19:33:19.581791  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.581800  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:19.581807  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:19.581882  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:19.619603  205913 cri.go:89] found id: ""
	I0408 19:33:19.619632  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.619642  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:19.619651  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:19.619718  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:19.653532  205913 cri.go:89] found id: ""
	I0408 19:33:19.653565  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.653576  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:19.653583  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:19.653636  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:19.687008  205913 cri.go:89] found id: ""
	I0408 19:33:19.687044  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.687056  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:19.687065  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:19.687119  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:19.720924  205913 cri.go:89] found id: ""
	I0408 19:33:19.720959  205913 logs.go:282] 0 containers: []
	W0408 19:33:19.720970  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:19.720983  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:19.721000  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:19.734321  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:19.734362  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:19.806299  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:19.806328  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:19.806348  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:19.881474  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:19.881505  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:19.922659  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:19.922698  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:22.480317  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:22.494014  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:22.494109  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:22.529550  205913 cri.go:89] found id: ""
	I0408 19:33:22.529591  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.529604  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:22.529613  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:22.529681  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:22.565597  205913 cri.go:89] found id: ""
	I0408 19:33:22.565628  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.565637  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:22.565643  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:22.565696  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:22.599559  205913 cri.go:89] found id: ""
	I0408 19:33:22.599597  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.599610  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:22.599620  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:22.599689  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:22.637534  205913 cri.go:89] found id: ""
	I0408 19:33:22.637568  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.637580  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:22.637589  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:22.637666  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:22.674590  205913 cri.go:89] found id: ""
	I0408 19:33:22.674625  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.674638  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:22.674649  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:22.674715  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:22.718834  205913 cri.go:89] found id: ""
	I0408 19:33:22.718872  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.718883  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:22.718892  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:22.718967  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:22.760910  205913 cri.go:89] found id: ""
	I0408 19:33:22.760956  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.760969  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:22.760979  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:22.761041  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:22.796358  205913 cri.go:89] found id: ""
	I0408 19:33:22.796393  205913 logs.go:282] 0 containers: []
	W0408 19:33:22.796402  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:22.796413  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:22.796424  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:22.842857  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:22.842894  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:22.895051  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:22.895099  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:22.908749  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:22.908786  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:22.981680  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:22.981704  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:22.981716  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:25.561865  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:25.580134  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:33:25.580202  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:33:25.635642  205913 cri.go:89] found id: ""
	I0408 19:33:25.635684  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.635695  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:33:25.635704  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:33:25.635774  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:33:25.682824  205913 cri.go:89] found id: ""
	I0408 19:33:25.682857  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.682870  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:33:25.682878  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:33:25.682944  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:33:25.717194  205913 cri.go:89] found id: ""
	I0408 19:33:25.717224  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.717232  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:33:25.717238  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:33:25.717297  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:33:25.752126  205913 cri.go:89] found id: ""
	I0408 19:33:25.752157  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.752165  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:33:25.752172  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:33:25.752239  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:33:25.791307  205913 cri.go:89] found id: ""
	I0408 19:33:25.791343  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.791353  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:33:25.791360  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:33:25.791414  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:33:25.832306  205913 cri.go:89] found id: ""
	I0408 19:33:25.832344  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.832355  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:33:25.832364  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:33:25.832426  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:33:25.871756  205913 cri.go:89] found id: ""
	I0408 19:33:25.871781  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.871790  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:33:25.871798  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:33:25.871859  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:33:25.914496  205913 cri.go:89] found id: ""
	I0408 19:33:25.914530  205913 logs.go:282] 0 containers: []
	W0408 19:33:25.914542  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:33:25.914556  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:33:25.914572  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:33:25.971156  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:33:25.971198  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:33:25.988134  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:33:25.988189  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:33:26.063763  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:33:26.063793  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:33:26.063809  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:33:26.152058  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:33:26.152105  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 19:33:28.697579  205913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:33:28.711610  205913 kubeadm.go:597] duration metric: took 4m1.77850119s to restartPrimaryControlPlane
	W0408 19:33:28.711693  205913 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 19:33:28.711716  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:33:29.965199  205913 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.253461114s)
	I0408 19:33:29.965293  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:33:29.979249  205913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:33:29.989368  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:33:29.999090  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:33:29.999116  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:33:29.999184  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:33:30.008977  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:33:30.009044  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:33:30.018708  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:33:30.028056  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:33:30.028118  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:33:30.038041  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:33:30.047543  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:33:30.047621  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:33:30.057984  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:33:30.067408  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:33:30.067482  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
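
	Before retrying `kubeadm init`, the runner checks whether each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and deletes any that do not; in this run every grep exits with status 2 because the files are already gone after the reset, so the `rm -f` calls are no-ops. A minimal Go sketch of that stale-config check (not minikube's implementation; it assumes the process can read and remove files under /etc/kubernetes, which the log does via sudo):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it, mirroring the `sudo rm -f` above.
				fmt.Printf("removing stale or missing config %s\n", f)
				_ = os.Remove(f)
				continue
			}
			fmt.Printf("keeping %s\n", f)
		}
	}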
	I0408 19:33:30.077109  205913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:33:30.156929  205913 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:33:30.156992  205913 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:33:30.298430  205913 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:33:30.298606  205913 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:33:30.298762  205913 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:33:30.484699  205913 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:33:30.488359  205913 out.go:235]   - Generating certificates and keys ...
	I0408 19:33:30.488518  205913 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:33:30.488634  205913 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:33:30.488777  205913 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:33:30.488893  205913 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:33:30.489030  205913 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:33:30.489114  205913 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:33:30.489219  205913 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:33:30.489306  205913 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:33:30.489424  205913 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:33:30.489540  205913 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:33:30.489596  205913 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:33:30.489681  205913 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:33:30.742024  205913 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:33:30.837729  205913 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:33:31.083281  205913 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:33:31.273487  205913 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:33:31.289322  205913 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:33:31.290267  205913 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:33:31.290327  205913 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:33:31.429720  205913 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:33:31.432783  205913 out.go:235]   - Booting up control plane ...
	I0408 19:33:31.432922  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:33:31.436034  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:33:31.439964  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:33:31.440128  205913 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:33:31.443076  205913 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:34:11.443529  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:34:11.443989  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:11.444237  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:16.444610  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:16.444853  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:26.445048  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:26.445308  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:46.445770  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:46.446104  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447251  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:35:26.447505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447529  205913 kubeadm.go:310] 
	I0408 19:35:26.447585  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:35:26.447662  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:35:26.447677  205913 kubeadm.go:310] 
	I0408 19:35:26.447726  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:35:26.447781  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:35:26.447887  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:35:26.447894  205913 kubeadm.go:310] 
	I0408 19:35:26.448020  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:35:26.448076  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:35:26.448126  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:35:26.448136  205913 kubeadm.go:310] 
	I0408 19:35:26.448267  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:35:26.448411  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:35:26.448474  205913 kubeadm.go:310] 
	I0408 19:35:26.448621  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:35:26.448774  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:35:26.448915  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:35:26.449049  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:35:26.449115  205913 kubeadm.go:310] 
	I0408 19:35:26.449270  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:35:26.449395  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:35:26.449512  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0408 19:35:26.449660  205913 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 19:35:26.449711  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:35:26.891169  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:35:26.904909  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:35:26.914475  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:35:26.914502  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:35:26.914553  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:35:26.924306  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:35:26.924374  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:35:26.934487  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:35:26.944461  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:35:26.944529  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:35:26.954995  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.964855  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:35:26.964941  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.975439  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:35:26.985173  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:35:26.985239  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:35:26.995433  205913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:35:27.204002  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:37:22.974768  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:37:22.974883  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:37:22.976335  205913 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:37:22.976383  205913 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:37:22.976466  205913 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:37:22.976595  205913 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:37:22.976752  205913 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:37:22.976829  205913 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:37:22.979175  205913 out.go:235]   - Generating certificates and keys ...
	I0408 19:37:22.979274  205913 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:37:22.979335  205913 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:37:22.979409  205913 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:37:22.979461  205913 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:37:22.979537  205913 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:37:22.979599  205913 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:37:22.979653  205913 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:37:22.979723  205913 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:37:22.979801  205913 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:37:22.979874  205913 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:37:22.979909  205913 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:37:22.979973  205913 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:37:22.980044  205913 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:37:22.980118  205913 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:37:22.980189  205913 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:37:22.980236  205913 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:37:22.980358  205913 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:37:22.980475  205913 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:37:22.980538  205913 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:37:22.980630  205913 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:37:22.982169  205913 out.go:235]   - Booting up control plane ...
	I0408 19:37:22.982280  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:37:22.982367  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:37:22.982450  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:37:22.982565  205913 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:37:22.982720  205913 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:37:22.982764  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:37:22.982823  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.982981  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983043  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983218  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983314  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983589  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983784  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983874  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.984082  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.984105  205913 kubeadm.go:310] 
	I0408 19:37:22.984143  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:37:22.984179  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:37:22.984185  205913 kubeadm.go:310] 
	I0408 19:37:22.984216  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:37:22.984247  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:37:22.984339  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:37:22.984346  205913 kubeadm.go:310] 
	I0408 19:37:22.984449  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:37:22.984495  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:37:22.984524  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:37:22.984531  205913 kubeadm.go:310] 
	I0408 19:37:22.984627  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:37:22.984699  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:37:22.984706  205913 kubeadm.go:310] 
	I0408 19:37:22.984805  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:37:22.984952  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:37:22.985064  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:37:22.985134  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:37:22.985199  205913 kubeadm.go:310] 
	I0408 19:37:22.985210  205913 kubeadm.go:394] duration metric: took 7m56.100848189s to StartCluster
	I0408 19:37:22.985262  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:37:22.985318  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:37:23.020922  205913 cri.go:89] found id: ""
	I0408 19:37:23.020963  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.020980  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:37:23.020989  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:37:23.021057  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:37:23.053119  205913 cri.go:89] found id: ""
	I0408 19:37:23.053155  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.053168  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:37:23.053179  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:37:23.053251  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:37:23.085925  205913 cri.go:89] found id: ""
	I0408 19:37:23.085959  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.085968  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:37:23.085976  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:37:23.086026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:37:23.119428  205913 cri.go:89] found id: ""
	I0408 19:37:23.119460  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.119472  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:37:23.119482  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:37:23.119555  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:37:23.152519  205913 cri.go:89] found id: ""
	I0408 19:37:23.152548  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.152556  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:37:23.152563  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:37:23.152616  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:37:23.185610  205913 cri.go:89] found id: ""
	I0408 19:37:23.185653  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.185660  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:37:23.185667  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:37:23.185722  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:37:23.220368  205913 cri.go:89] found id: ""
	I0408 19:37:23.220396  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.220404  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:37:23.220411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:37:23.220465  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:37:23.253979  205913 cri.go:89] found id: ""
	I0408 19:37:23.254016  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.254029  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:37:23.254044  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:37:23.254061  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:37:23.304529  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:37:23.304574  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:37:23.318406  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:37:23.318443  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:37:23.393733  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:37:23.393774  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:37:23.393795  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:37:23.495288  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:37:23.495333  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0408 19:37:23.534511  205913 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:37:23.534568  205913 out.go:270] * 
	* 
	W0408 19:37:23.534629  205913 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.534643  205913 out.go:270] * 
	* 
	W0408 19:37:23.535480  205913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:37:23.539860  205913 out.go:201] 
	W0408 19:37:23.541197  205913 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.541240  205913 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:37:23.541256  205913 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:37:23.542872  205913 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-257500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
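For reference, a minimal sketch of how this failure could be chased by hand, using only the commands that the kubeadm output and the minikube suggestion above already name. The profile name (old-k8s-version-257500), the CRI-O socket path, and the --extra-config=kubelet.cgroup-driver=systemd flag are taken from the failed run; wrapping the node-side commands in `minikube ssh` is an assumption about how to reach the VM, and CONTAINERID is the placeholder from the kubeadm message, not a real ID:

	# Illustrative sketch only; commands taken from the kubeadm/minikube output above.
	# Check whether the kubelet is running on the node and read its recent journal.
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# List any control-plane containers CRI-O started, then inspect a failing one
	# (replace CONTAINERID with an ID from the ps output).
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-257500 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"

	# Retry the start with the cgroup-driver override the error message suggests.
	out/minikube-linux-amd64 start -p old-k8s-version-257500 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd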
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (253.523461ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-257500 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-552268 image list                           | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-171742                           | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| image   | embed-certs-787708 image list                          | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| addons  | enable metrics-server -p newest-cni-574058             | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-574058                  | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-574058 image list                           | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 19:33:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:33:34.230845  208578 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:33:34.231171  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231183  208578 out.go:358] Setting ErrFile to fd 2...
	I0408 19:33:34.231190  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231395  208578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:33:34.232008  208578 out.go:352] Setting JSON to false
	I0408 19:33:34.232967  208578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11759,"bootTime":1744129055,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:33:34.233104  208578 start.go:139] virtualization: kvm guest
	I0408 19:33:34.235635  208578 out.go:177] * [newest-cni-574058] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:33:34.237290  208578 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:33:34.237318  208578 notify.go:220] Checking for updates...
	I0408 19:33:34.240155  208578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:33:34.241519  208578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:33:34.242927  208578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:33:34.244269  208578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:33:34.245526  208578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:33:34.247349  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:34.247740  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.247825  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.264063  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0408 19:33:34.264512  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.265026  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.265048  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.265428  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.265637  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.266022  208578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:33:34.266381  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.266435  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.281881  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0408 19:33:34.282409  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.282906  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.282946  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.283346  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.283576  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.324342  208578 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 19:33:34.325883  208578 start.go:297] selected driver: kvm2
	I0408 19:33:34.325909  208578 start.go:901] validating driver "kvm2" against &{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.326033  208578 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:33:34.326838  208578 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.326966  208578 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:33:34.345713  208578 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:33:34.346165  208578 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:33:34.346205  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:34.346244  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:34.346277  208578 start.go:340] cluster config:
	{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.346373  208578 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.349587  208578 out.go:177] * Starting "newest-cni-574058" primary control-plane node in "newest-cni-574058" cluster
	I0408 19:33:34.351259  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:34.351319  208578 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 19:33:34.351330  208578 cache.go:56] Caching tarball of preloaded images
	I0408 19:33:34.351437  208578 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:33:34.351449  208578 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 19:33:34.351545  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:34.351744  208578 start.go:360] acquireMachinesLock for newest-cni-574058: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:33:34.351787  208578 start.go:364] duration metric: took 21.755µs to acquireMachinesLock for "newest-cni-574058"
	I0408 19:33:34.351801  208578 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:33:34.351808  208578 fix.go:54] fixHost starting: 
	I0408 19:33:34.352081  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.352121  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.368244  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0408 19:33:34.368778  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.369316  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.369343  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.369695  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.369947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.370116  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:33:34.371986  208578 fix.go:112] recreateIfNeeded on newest-cni-574058: state=Stopped err=<nil>
	I0408 19:33:34.372015  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	W0408 19:33:34.372216  208578 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:33:34.374462  208578 out.go:177] * Restarting existing kvm2 VM for "newest-cni-574058" ...
	I0408 19:33:34.375950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Start
	I0408 19:33:34.376201  208578 main.go:141] libmachine: (newest-cni-574058) starting domain...
	I0408 19:33:34.376225  208578 main.go:141] libmachine: (newest-cni-574058) ensuring networks are active...
	I0408 19:33:34.377315  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network default is active
	I0408 19:33:34.377681  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network mk-newest-cni-574058 is active
	I0408 19:33:34.378244  208578 main.go:141] libmachine: (newest-cni-574058) getting domain XML...
	I0408 19:33:34.379041  208578 main.go:141] libmachine: (newest-cni-574058) creating domain...
	I0408 19:33:35.672397  208578 main.go:141] libmachine: (newest-cni-574058) waiting for IP...
	I0408 19:33:35.673656  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.674355  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.674476  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.674330  208614 retry.go:31] will retry after 282.726587ms: waiting for domain to come up
	I0408 19:33:35.959023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.959750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.959799  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.959723  208614 retry.go:31] will retry after 385.478621ms: waiting for domain to come up
	I0408 19:33:36.347685  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.348376  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.348396  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.348306  208614 retry.go:31] will retry after 404.684646ms: waiting for domain to come up
	I0408 19:33:36.755222  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.755863  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.755898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.755813  208614 retry.go:31] will retry after 497.375255ms: waiting for domain to come up
	I0408 19:33:37.254683  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.255365  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.255393  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.255296  208614 retry.go:31] will retry after 509.338649ms: waiting for domain to come up
	I0408 19:33:37.766227  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.766698  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.766734  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.766633  208614 retry.go:31] will retry after 698.136327ms: waiting for domain to come up
	I0408 19:33:38.466816  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:38.467559  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:38.467591  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:38.467497  208614 retry.go:31] will retry after 904.061633ms: waiting for domain to come up
	I0408 19:33:39.373732  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:39.374424  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:39.374455  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:39.374383  208614 retry.go:31] will retry after 1.257419141s: waiting for domain to come up
	I0408 19:33:40.634215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:40.634925  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:40.634967  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:40.634890  208614 retry.go:31] will retry after 1.399974576s: waiting for domain to come up
	I0408 19:33:42.036596  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:42.037053  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:42.037086  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:42.037022  208614 retry.go:31] will retry after 2.102706701s: waiting for domain to come up
	I0408 19:33:44.142601  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:44.143119  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:44.143148  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:44.143058  208614 retry.go:31] will retry after 1.817898038s: waiting for domain to come up
	I0408 19:33:45.963843  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:45.964510  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:45.964539  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:45.964476  208614 retry.go:31] will retry after 2.758955998s: waiting for domain to come up
	I0408 19:33:48.726573  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:48.727245  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:48.727271  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:48.727185  208614 retry.go:31] will retry after 3.898986344s: waiting for domain to come up
	I0408 19:33:52.630703  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.631576  208578 main.go:141] libmachine: (newest-cni-574058) found domain IP: 192.168.61.150
	I0408 19:33:52.631597  208578 main.go:141] libmachine: (newest-cni-574058) reserving static IP address...
	I0408 19:33:52.631608  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has current primary IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.632402  208578 main.go:141] libmachine: (newest-cni-574058) reserved static IP address 192.168.61.150 for domain newest-cni-574058
	I0408 19:33:52.632421  208578 main.go:141] libmachine: (newest-cni-574058) waiting for SSH...
	I0408 19:33:52.632458  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.632469  208578 main.go:141] libmachine: (newest-cni-574058) DBG | skip adding static IP to network mk-newest-cni-574058 - found existing host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"}
	I0408 19:33:52.632479  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Getting to WaitForSSH function...
	I0408 19:33:52.635782  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636291  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.636326  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636496  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH client type: external
	I0408 19:33:52.636521  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa (-rw-------)
	I0408 19:33:52.636548  208578 main.go:141] libmachine: (newest-cni-574058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:33:52.636562  208578 main.go:141] libmachine: (newest-cni-574058) DBG | About to run SSH command:
	I0408 19:33:52.636589  208578 main.go:141] libmachine: (newest-cni-574058) DBG | exit 0
	I0408 19:33:52.765974  208578 main.go:141] libmachine: (newest-cni-574058) DBG | SSH cmd err, output: <nil>: 
	I0408 19:33:52.766426  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetConfigRaw
	I0408 19:33:52.767016  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:52.769739  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770168  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.770220  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770438  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:52.770706  208578 machine.go:93] provisionDockerMachine start ...
	I0408 19:33:52.770731  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:52.770954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.773407  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773715  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.773750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773910  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.774110  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774289  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774410  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.774570  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.774811  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.774822  208578 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:33:52.890400  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 19:33:52.890431  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890711  208578 buildroot.go:166] provisioning hostname "newest-cni-574058"
	I0408 19:33:52.890741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890968  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.894069  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894478  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.894512  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894708  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.894945  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895134  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895285  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.895477  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.895692  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.895704  208578 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574058 && echo "newest-cni-574058" | sudo tee /etc/hostname
	I0408 19:33:53.023785  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574058
	
	I0408 19:33:53.023825  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.027006  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027468  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.027495  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.027958  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028197  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028403  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.028589  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.028798  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.028814  208578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574058/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:33:53.152963  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:33:53.152997  208578 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:33:53.153024  208578 buildroot.go:174] setting up certificates
	I0408 19:33:53.153038  208578 provision.go:84] configureAuth start
	I0408 19:33:53.153052  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:53.153364  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.156500  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157007  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.157042  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157303  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.159804  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160264  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.160306  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160485  208578 provision.go:143] copyHostCerts
	I0408 19:33:53.160550  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:33:53.160576  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:33:53.160651  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:33:53.160763  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:33:53.160773  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:33:53.160808  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:33:53.160885  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:33:53.160895  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:33:53.160928  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:33:53.161007  208578 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574058 san=[127.0.0.1 192.168.61.150 localhost minikube newest-cni-574058]
	I0408 19:33:53.270721  208578 provision.go:177] copyRemoteCerts
	I0408 19:33:53.270792  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:33:53.270820  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.273858  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274374  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.274408  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274622  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.274785  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.274944  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.275081  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.360592  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:33:53.386183  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 19:33:53.411315  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:33:53.436280  208578 provision.go:87] duration metric: took 283.223544ms to configureAuth
	I0408 19:33:53.436311  208578 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:33:53.436543  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:53.436621  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.439531  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440031  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.440073  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440215  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.440446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440612  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.441064  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.441292  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.441314  208578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:33:53.684339  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:33:53.684377  208578 machine.go:96] duration metric: took 913.653074ms to provisionDockerMachine
	I0408 19:33:53.684396  208578 start.go:293] postStartSetup for "newest-cni-574058" (driver="kvm2")
	I0408 19:33:53.684410  208578 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:33:53.684436  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.684808  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:33:53.684882  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.687947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688459  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.688493  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688773  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.688991  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.689144  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.689310  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.776771  208578 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:33:53.780766  208578 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:33:53.780795  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:33:53.780863  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:33:53.780965  208578 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:33:53.781049  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:33:53.790366  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:53.814239  208578 start.go:296] duration metric: took 129.826394ms for postStartSetup
	I0408 19:33:53.814293  208578 fix.go:56] duration metric: took 19.462483595s for fixHost
	I0408 19:33:53.814322  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.817395  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817718  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.817745  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817997  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.818268  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818450  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818601  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.818821  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.819040  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.819050  208578 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:33:53.930752  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744140833.903703018
	
	I0408 19:33:53.930848  208578 fix.go:216] guest clock: 1744140833.903703018
	I0408 19:33:53.930884  208578 fix.go:229] Guest: 2025-04-08 19:33:53.903703018 +0000 UTC Remote: 2025-04-08 19:33:53.814299407 +0000 UTC m=+19.623756541 (delta=89.403611ms)
	I0408 19:33:53.930915  208578 fix.go:200] guest clock delta is within tolerance: 89.403611ms
	I0408 19:33:53.930920  208578 start.go:83] releasing machines lock for "newest-cni-574058", held for 19.579124508s
	I0408 19:33:53.930947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.931294  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.934215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934669  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.934700  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935387  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935566  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935681  208578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:33:53.935726  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.935862  208578 ssh_runner.go:195] Run: cat /version.json
	I0408 19:33:53.935890  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.938632  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.938919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.938947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939012  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939145  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939349  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939391  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.939418  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939520  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939588  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939652  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.939704  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939819  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939965  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:54.019795  208578 ssh_runner.go:195] Run: systemctl --version
	I0408 19:33:54.043888  208578 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:33:54.188499  208578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:33:54.195169  208578 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:33:54.195259  208578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:33:54.213485  208578 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:33:54.213520  208578 start.go:495] detecting cgroup driver to use...
	I0408 19:33:54.213598  208578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:33:54.230566  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:33:54.245352  208578 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:33:54.245430  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:33:54.259817  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:33:54.273720  208578 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:33:54.392045  208578 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:33:54.542787  208578 docker.go:233] disabling docker service ...
	I0408 19:33:54.542891  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:33:54.558897  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:33:54.573787  208578 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:33:54.727894  208578 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:33:54.863643  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:33:54.878049  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:33:54.897425  208578 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 19:33:54.897490  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.908496  208578 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:33:54.908579  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.920364  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.932289  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.944311  208578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:33:54.956493  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.968393  208578 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.987441  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.999068  208578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:33:55.009771  208578 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:33:55.009850  208578 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:33:55.024523  208578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:33:55.034318  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:55.166072  208578 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:33:55.254450  208578 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:33:55.254533  208578 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:33:55.259681  208578 start.go:563] Will wait 60s for crictl version
	I0408 19:33:55.259766  208578 ssh_runner.go:195] Run: which crictl
	I0408 19:33:55.263818  208578 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:33:55.301447  208578 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:33:55.301538  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.329793  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.360507  208578 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 19:33:55.362286  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:55.365032  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365406  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:55.365440  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365660  208578 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 19:33:55.370178  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:55.385958  208578 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0408 19:33:55.387574  208578 kubeadm.go:883] updating cluster {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:33:55.387726  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:55.387802  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:55.427839  208578 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 19:33:55.427913  208578 ssh_runner.go:195] Run: which lz4
	I0408 19:33:55.432119  208578 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:33:55.436471  208578 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:33:55.436512  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 19:33:56.853092  208578 crio.go:462] duration metric: took 1.420999494s to copy over tarball
	I0408 19:33:56.853206  208578 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:33:59.123401  208578 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.270163458s)
	I0408 19:33:59.123431  208578 crio.go:469] duration metric: took 2.27029276s to extract the tarball
	I0408 19:33:59.123439  208578 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:33:59.160214  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:59.208181  208578 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:33:59.208217  208578 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:33:59.208226  208578 kubeadm.go:934] updating node { 192.168.61.150 8443 v1.32.2 crio true true} ...
	I0408 19:33:59.208330  208578 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:33:59.208394  208578 ssh_runner.go:195] Run: crio config
	I0408 19:33:59.259080  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:59.259105  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:59.259117  208578 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0408 19:33:59.259139  208578 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574058 NodeName:newest-cni-574058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:33:59.259269  208578 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574058"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:33:59.259340  208578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 19:33:59.269297  208578 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:33:59.269396  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:33:59.279795  208578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 19:33:59.298267  208578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:33:59.317359  208578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0408 19:33:59.338191  208578 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0408 19:33:59.342078  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:59.354471  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:59.484349  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:33:59.502489  208578 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058 for IP: 192.168.61.150
	I0408 19:33:59.502521  208578 certs.go:194] generating shared ca certs ...
	I0408 19:33:59.502543  208578 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:33:59.502741  208578 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:33:59.502794  208578 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:33:59.502809  208578 certs.go:256] generating profile certs ...
	I0408 19:33:59.502923  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/client.key
	I0408 19:33:59.502988  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key.497d1bab
	I0408 19:33:59.503021  208578 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key
	I0408 19:33:59.503134  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:33:59.503171  208578 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:33:59.503185  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:33:59.503230  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:33:59.503268  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:33:59.503286  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:33:59.503326  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:59.503913  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:33:59.554815  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:33:59.587696  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:33:59.617750  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:33:59.653785  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 19:33:59.686891  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:33:59.714216  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:33:59.741329  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:33:59.767842  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:33:59.793442  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:33:59.818756  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:33:59.845009  208578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:33:59.863360  208578 ssh_runner.go:195] Run: openssl version
	I0408 19:33:59.869412  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:33:59.881065  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886169  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886244  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.892580  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:33:59.904478  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:33:59.916164  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921621  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921692  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.927944  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:33:59.939080  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:33:59.950214  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954814  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954882  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.960640  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:33:59.971958  208578 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:33:59.977116  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:33:59.983804  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:33:59.990483  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:33:59.997068  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:34:00.004168  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:34:00.010941  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 19:34:00.017644  208578 kubeadm.go:392] StartCluster: {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:34:00.017776  208578 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:34:00.017854  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.055073  208578 cri.go:89] found id: ""
	I0408 19:34:00.055148  208578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:34:00.065538  208578 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 19:34:00.065561  208578 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 19:34:00.065611  208578 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:34:00.075742  208578 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:34:00.076405  208578 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574058" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:00.076683  208578 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-141129/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574058" cluster setting kubeconfig missing "newest-cni-574058" context setting]
	I0408 19:34:00.077198  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:00.078950  208578 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:34:00.088631  208578 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.150
	I0408 19:34:00.088669  208578 kubeadm.go:1160] stopping kube-system containers ...
	I0408 19:34:00.088682  208578 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 19:34:00.088743  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.126373  208578 cri.go:89] found id: ""
	I0408 19:34:00.126455  208578 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 19:34:00.143354  208578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:34:00.153546  208578 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:34:00.153569  208578 kubeadm.go:157] found existing configuration files:
	
	I0408 19:34:00.153617  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:34:00.163240  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:34:00.163299  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:34:00.173240  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:34:00.183043  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:34:00.183122  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:34:00.193089  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.202337  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:34:00.202427  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.211522  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:34:00.221218  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:34:00.221298  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:34:00.231309  208578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:34:00.244340  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:00.384842  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.398082  208578 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013202999s)
	I0408 19:34:01.398108  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.602105  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.682117  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.768287  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:01.768387  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.268726  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.769354  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.787626  208578 api_server.go:72] duration metric: took 1.019343648s to wait for apiserver process to appear ...
	I0408 19:34:02.787664  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:02.787689  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.115821  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.115871  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.115897  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.124468  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.124505  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.287840  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.293980  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.294009  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:06.788746  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.794938  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.794977  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.288758  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.295612  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:07.295653  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.788430  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.793912  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:07.800651  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:07.800686  208578 api_server.go:131] duration metric: took 5.013015214s to wait for apiserver health ...
	I0408 19:34:07.800700  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:34:07.800707  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:34:07.803044  208578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 19:34:07.804846  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 19:34:07.818973  208578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 19:34:07.841790  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:07.847476  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:07.847517  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:07.847525  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:07.847547  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:07.847555  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:07.847561  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 19:34:07.847598  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:07.847609  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:07.847615  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 19:34:07.847622  208578 system_pods.go:74] duration metric: took 5.804908ms to wait for pod list to return data ...
	I0408 19:34:07.847633  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:07.860421  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:07.860460  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:07.860474  208578 node_conditions.go:105] duration metric: took 12.836545ms to run NodePressure ...
	I0408 19:34:07.860496  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:08.167428  208578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:34:08.179787  208578 ops.go:34] apiserver oom_adj: -16
	I0408 19:34:08.179815  208578 kubeadm.go:597] duration metric: took 8.114247325s to restartPrimaryControlPlane
	I0408 19:34:08.179826  208578 kubeadm.go:394] duration metric: took 8.162197731s to StartCluster
	I0408 19:34:08.179854  208578 settings.go:142] acquiring lock: {Name:mk8d530f6b8ad949177759460b330a3d74710125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.180042  208578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:08.181338  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.181671  208578 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:34:08.181921  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:34:08.181826  208578 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 19:34:08.182006  208578 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574058"
	I0408 19:34:08.182028  208578 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574058"
	W0408 19:34:08.182038  208578 addons.go:247] addon storage-provisioner should already be in state true
	I0408 19:34:08.182051  208578 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574058"
	I0408 19:34:08.182074  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182083  208578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574058"
	I0408 19:34:08.182088  208578 addons.go:69] Setting dashboard=true in profile "newest-cni-574058"
	I0408 19:34:08.182105  208578 addons.go:238] Setting addon dashboard=true in "newest-cni-574058"
	W0408 19:34:08.182113  208578 addons.go:247] addon dashboard should already be in state true
	I0408 19:34:08.182131  208578 addons.go:69] Setting metrics-server=true in profile "newest-cni-574058"
	I0408 19:34:08.182169  208578 addons.go:238] Setting addon metrics-server=true in "newest-cni-574058"
	W0408 19:34:08.182186  208578 addons.go:247] addon metrics-server should already be in state true
	I0408 19:34:08.182145  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182385  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182625  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182635  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182780  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182808  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182809  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182856  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182860  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182894  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.183614  208578 out.go:177] * Verifying Kubernetes components...
	I0408 19:34:08.185136  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:34:08.205252  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0408 19:34:08.205269  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
	I0408 19:34:08.205250  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0408 19:34:08.205782  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205862  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205880  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.206304  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206326  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206463  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206480  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206521  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206542  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206772  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206876  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206951  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.207122  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.207475  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207522  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207530  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0408 19:34:08.207780  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207836  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207946  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.208393  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.208416  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.208853  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.209417  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.209467  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.210319  208578 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574058"
	W0408 19:34:08.210343  208578 addons.go:247] addon default-storageclass should already be in state true
	I0408 19:34:08.210375  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.210704  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.210751  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.225440  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0408 19:34:08.225710  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0408 19:34:08.228520  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0408 19:34:08.230369  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230448  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230873  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.230895  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231050  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231066  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231117  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.231304  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231477  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231501  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.231615  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231634  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231727  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.232131  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.232342  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.233628  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234100  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234470  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.236144  208578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:34:08.236161  208578 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 19:34:08.236145  208578 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0408 19:34:08.237353  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 19:34:08.237375  208578 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 19:34:08.237433  208578 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.237448  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:34:08.237400  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.237474  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.238698  208578 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0408 19:34:08.240058  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0408 19:34:08.240080  208578 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0408 19:34:08.240105  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.241332  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241339  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241518  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241547  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241759  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.241898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.242181  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242231  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242347  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242391  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242512  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.242521  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.243247  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243599  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.243625  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243791  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.243950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.244122  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.244231  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.254405  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0408 19:34:08.254920  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.255483  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.255513  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.255922  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.256515  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.256572  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.273680  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0408 19:34:08.274259  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.274762  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.274785  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.275206  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.275446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.277473  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.277707  208578 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.277720  208578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:34:08.277738  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.281550  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.282070  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282405  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.282639  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.282811  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.282957  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.427224  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:34:08.443994  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:08.444087  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:08.464573  208578 api_server.go:72] duration metric: took 282.851736ms to wait for apiserver process to appear ...
	I0408 19:34:08.464606  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:08.464631  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:08.471670  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:08.473124  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:08.473152  208578 api_server.go:131] duration metric: took 8.53801ms to wait for apiserver health ...
	I0408 19:34:08.473161  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:08.480501  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:08.480533  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:08.480541  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:08.480551  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:08.480559  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:08.480565  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running
	I0408 19:34:08.480573  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:08.480583  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:08.480589  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running
	I0408 19:34:08.480619  208578 system_pods.go:74] duration metric: took 7.451617ms to wait for pod list to return data ...
	I0408 19:34:08.480627  208578 default_sa.go:34] waiting for default service account to be created ...
	I0408 19:34:08.484250  208578 default_sa.go:45] found service account: "default"
	I0408 19:34:08.484277  208578 default_sa.go:55] duration metric: took 3.643294ms for default service account to be created ...
	I0408 19:34:08.484293  208578 kubeadm.go:582] duration metric: took 302.580864ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:34:08.484317  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:08.487398  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:08.487426  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:08.487441  208578 node_conditions.go:105] duration metric: took 3.118357ms to run NodePressure ...
	I0408 19:34:08.487461  208578 start.go:241] waiting for startup goroutines ...
	I0408 19:34:08.536933  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 19:34:08.536957  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 19:34:08.539452  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0408 19:34:08.539479  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0408 19:34:08.557315  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.578553  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 19:34:08.578580  208578 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 19:34:08.583900  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.606686  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0408 19:34:08.606717  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0408 19:34:08.645882  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.645916  208578 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 19:34:08.656641  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0408 19:34:08.656676  208578 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0408 19:34:08.699927  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.706202  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0408 19:34:08.706227  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0408 19:34:08.775120  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0408 19:34:08.775154  208578 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0408 19:34:08.889009  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0408 19:34:08.889058  208578 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0408 19:34:08.981237  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0408 19:34:08.981269  208578 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0408 19:34:09.040922  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0408 19:34:09.040954  208578 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0408 19:34:09.064862  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:09.064889  208578 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0408 19:34:09.141240  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:10.275126  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.717762904s)
	I0408 19:34:10.275206  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275219  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275147  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.691207244s)
	I0408 19:34:10.275285  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275304  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275579  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275630  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275636  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275644  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275649  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275653  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275675  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275714  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275933  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275990  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.276026  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.276110  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.276124  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282287  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.282320  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.282699  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.282727  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282737  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.346432  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.646437158s)
	I0408 19:34:10.346500  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346513  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346895  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.346916  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.346927  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346936  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346954  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347193  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.347211  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.347217  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347242  208578 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574058"
	I0408 19:34:10.900219  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.758920795s)
	I0408 19:34:10.900351  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900404  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.900746  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.900793  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.900816  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900830  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.901113  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.901156  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.901166  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.903191  208578 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574058 addons enable metrics-server
	
	I0408 19:34:10.905113  208578 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0408 19:34:10.906865  208578 addons.go:514] duration metric: took 2.725052548s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0408 19:34:10.906918  208578 start.go:246] waiting for cluster config update ...
	I0408 19:34:10.906936  208578 start.go:255] writing updated cluster config ...
	I0408 19:34:10.907298  208578 ssh_runner.go:195] Run: rm -f paused
	I0408 19:34:10.967232  208578 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 19:34:10.969649  208578 out.go:177] * Done! kubectl is now configured to use "newest-cni-574058" cluster and "default" namespace by default
	I0408 19:34:11.443529  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:34:11.443989  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:11.444237  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:16.444610  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:16.444853  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:26.445048  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:26.445308  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:46.445770  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:46.446104  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447251  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:35:26.447505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447529  205913 kubeadm.go:310] 
	I0408 19:35:26.447585  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:35:26.447662  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:35:26.447677  205913 kubeadm.go:310] 
	I0408 19:35:26.447726  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:35:26.447781  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:35:26.447887  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:35:26.447894  205913 kubeadm.go:310] 
	I0408 19:35:26.448020  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:35:26.448076  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:35:26.448126  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:35:26.448136  205913 kubeadm.go:310] 
	I0408 19:35:26.448267  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:35:26.448411  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:35:26.448474  205913 kubeadm.go:310] 
	I0408 19:35:26.448621  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:35:26.448774  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:35:26.448915  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:35:26.449049  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:35:26.449115  205913 kubeadm.go:310] 
	I0408 19:35:26.449270  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:35:26.449395  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:35:26.449512  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0408 19:35:26.449660  205913 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 19:35:26.449711  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:35:26.891169  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:35:26.904909  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:35:26.914475  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:35:26.914502  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:35:26.914553  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:35:26.924306  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:35:26.924374  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:35:26.934487  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:35:26.944461  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:35:26.944529  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:35:26.954995  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.964855  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:35:26.964941  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.975439  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:35:26.985173  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:35:26.985239  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:35:26.995433  205913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:35:27.204002  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:37:22.974768  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:37:22.974883  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:37:22.976335  205913 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:37:22.976383  205913 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:37:22.976466  205913 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:37:22.976595  205913 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:37:22.976752  205913 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:37:22.976829  205913 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:37:22.979175  205913 out.go:235]   - Generating certificates and keys ...
	I0408 19:37:22.979274  205913 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:37:22.979335  205913 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:37:22.979409  205913 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:37:22.979461  205913 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:37:22.979537  205913 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:37:22.979599  205913 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:37:22.979653  205913 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:37:22.979723  205913 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:37:22.979801  205913 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:37:22.979874  205913 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:37:22.979909  205913 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:37:22.979973  205913 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:37:22.980044  205913 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:37:22.980118  205913 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:37:22.980189  205913 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:37:22.980236  205913 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:37:22.980358  205913 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:37:22.980475  205913 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:37:22.980538  205913 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:37:22.980630  205913 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:37:22.982169  205913 out.go:235]   - Booting up control plane ...
	I0408 19:37:22.982280  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:37:22.982367  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:37:22.982450  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:37:22.982565  205913 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:37:22.982720  205913 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:37:22.982764  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:37:22.982823  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.982981  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983043  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983218  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983314  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983589  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983784  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983874  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.984082  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.984105  205913 kubeadm.go:310] 
	I0408 19:37:22.984143  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:37:22.984179  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:37:22.984185  205913 kubeadm.go:310] 
	I0408 19:37:22.984216  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:37:22.984247  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:37:22.984339  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:37:22.984346  205913 kubeadm.go:310] 
	I0408 19:37:22.984449  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:37:22.984495  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:37:22.984524  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:37:22.984531  205913 kubeadm.go:310] 
	I0408 19:37:22.984627  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:37:22.984699  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:37:22.984706  205913 kubeadm.go:310] 
	I0408 19:37:22.984805  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:37:22.984952  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:37:22.985064  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:37:22.985134  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:37:22.985199  205913 kubeadm.go:310] 
	I0408 19:37:22.985210  205913 kubeadm.go:394] duration metric: took 7m56.100848189s to StartCluster
	I0408 19:37:22.985262  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:37:22.985318  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:37:23.020922  205913 cri.go:89] found id: ""
	I0408 19:37:23.020963  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.020980  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:37:23.020989  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:37:23.021057  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:37:23.053119  205913 cri.go:89] found id: ""
	I0408 19:37:23.053155  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.053168  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:37:23.053179  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:37:23.053251  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:37:23.085925  205913 cri.go:89] found id: ""
	I0408 19:37:23.085959  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.085968  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:37:23.085976  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:37:23.086026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:37:23.119428  205913 cri.go:89] found id: ""
	I0408 19:37:23.119460  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.119472  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:37:23.119482  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:37:23.119555  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:37:23.152519  205913 cri.go:89] found id: ""
	I0408 19:37:23.152548  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.152556  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:37:23.152563  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:37:23.152616  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:37:23.185610  205913 cri.go:89] found id: ""
	I0408 19:37:23.185653  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.185660  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:37:23.185667  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:37:23.185722  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:37:23.220368  205913 cri.go:89] found id: ""
	I0408 19:37:23.220396  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.220404  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:37:23.220411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:37:23.220465  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:37:23.253979  205913 cri.go:89] found id: ""
	I0408 19:37:23.254016  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.254029  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:37:23.254044  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:37:23.254061  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:37:23.304529  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:37:23.304574  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:37:23.318406  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:37:23.318443  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:37:23.393733  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:37:23.393774  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:37:23.393795  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:37:23.495288  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:37:23.495333  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0408 19:37:23.534511  205913 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:37:23.534568  205913 out.go:270] * 
	W0408 19:37:23.534629  205913 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.534643  205913 out.go:270] * 
	W0408 19:37:23.535480  205913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:37:23.539860  205913 out.go:201] 
	W0408 19:37:23.541197  205913 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.541240  205913 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:37:23.541256  205913 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:37:23.542872  205913 out.go:201] 
	
	
	==> CRI-O <==
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.545736608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141044545715581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99d81afd-416c-42a9-82f1-bbceb8af38ac name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.546334800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1a10b86-2e70-47d8-9964-6c8eed6ca9ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.546390484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1a10b86-2e70-47d8-9964-6c8eed6ca9ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.546421717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f1a10b86-2e70-47d8-9964-6c8eed6ca9ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.576182426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db9effe-8240-42ab-925c-76e6d32ef0b6 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.576250943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db9effe-8240-42ab-925c-76e6d32ef0b6 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.577323279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec54a624-20e2-41d9-b306-62e42c1355cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.577670902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141044577651824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec54a624-20e2-41d9-b306-62e42c1355cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.578301185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e708af15-2fad-4b93-917e-6b53daddfe3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.578350783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e708af15-2fad-4b93-917e-6b53daddfe3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.578381145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e708af15-2fad-4b93-917e-6b53daddfe3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.609504754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38823a43-1ed3-49e1-be2e-8bca036d00ac name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.609577370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38823a43-1ed3-49e1-be2e-8bca036d00ac name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.610802816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab54cab7-7f63-4c39-b18c-e6152eb8bf3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.611270516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141044611244947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab54cab7-7f63-4c39-b18c-e6152eb8bf3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.611878576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e39f186-9f25-4669-b611-4898f25a6623 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.611972770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e39f186-9f25-4669-b611-4898f25a6623 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.612023389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8e39f186-9f25-4669-b611-4898f25a6623 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.641950997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d750425e-fdfe-44b4-b515-b8f3f0a1b800 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.642024583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d750425e-fdfe-44b4-b515-b8f3f0a1b800 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.643295171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b1df1ca-3460-4cd3-9fa8-4aa42c63e8df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.643646799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141044643628588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b1df1ca-3460-4cd3-9fa8-4aa42c63e8df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.644079005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cc08aae-477a-4954-83cb-ee80ca94e60d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.644127395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cc08aae-477a-4954-83cb-ee80ca94e60d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:37:24 old-k8s-version-257500 crio[629]: time="2025-04-08 19:37:24.644158769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2cc08aae-477a-4954-83cb-ee80ca94e60d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 19:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049597] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039830] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.124668] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.083532] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.166101] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.061460] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064691] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.194145] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.127525] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.273689] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.301504] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058099] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817423] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +11.268261] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 19:33] systemd-fstab-generator[4958]: Ignoring "noauto" option for root device
	[Apr 8 19:35] systemd-fstab-generator[5233]: Ignoring "noauto" option for root device
	[  +0.061975] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:37:24 up 8 min,  0 users,  load average: 0.00, 0.05, 0.02
	Linux old-k8s-version-257500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0003211a0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000669f50, 0x24, 0x0, ...)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: net.(*Dialer).DialContext(0xc0001fb1a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000669f50, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ad6a60, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000669f50, 0x24, 0x60, 0x7f696d04d4f8, 0x118, ...)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: net/http.(*Transport).dial(0xc000852000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000669f50, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: net/http.(*Transport).dialConn(0xc000852000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0003ce600, 0x5, 0xc000669f50, 0x24, 0x0, 0xc000a71680, ...)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: net/http.(*Transport).dialConnFor(0xc000852000, 0xc000766210)
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]: created by net/http.(*Transport).queueForDial
	Apr 08 19:37:22 old-k8s-version-257500 kubelet[5415]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 08 19:37:22 old-k8s-version-257500 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 08 19:37:22 old-k8s-version-257500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 08 19:37:23 old-k8s-version-257500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 08 19:37:23 old-k8s-version-257500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 19:37:23 old-k8s-version-257500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 08 19:37:23 old-k8s-version-257500 kubelet[5482]: I0408 19:37:23.655807    5482 server.go:416] Version: v1.20.0
	Apr 08 19:37:23 old-k8s-version-257500 kubelet[5482]: I0408 19:37:23.656213    5482 server.go:837] Client rotation is on, will bootstrap in background
	Apr 08 19:37:23 old-k8s-version-257500 kubelet[5482]: I0408 19:37:23.658390    5482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 08 19:37:23 old-k8s-version-257500 kubelet[5482]: W0408 19:37:23.659498    5482 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 08 19:37:23 old-k8s-version-257500 kubelet[5482]: I0408 19:37:23.659756    5482 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (244.470381ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-257500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.90s)
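For reference, a minimal sketch of the remediation the failure log above suggests (checking kubelet, retrying with the cgroup driver pinned to systemd, and listing CRI-O containers). The profile name, kvm2 driver, crio runtime, and v1.20.0 version are taken from this run; they are assumptions for any other environment and may need adjusting:

	# Inspect kubelet logs on the node (the log above shows it crash-looping, restart counter at 20):
	minikube -p old-k8s-version-257500 ssh -- sudo journalctl -xeu kubelet | tail -n 100

	# Retry the start with the kubelet cgroup driver pinned to systemd, as minikube suggests:
	minikube start -p old-k8s-version-257500 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# List any control-plane containers CRI-O did manage to create, per the kubeadm hint:
	minikube -p old-k8s-version-257500 ssh -- \
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
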

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:37:25.912446  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:37:43.729870  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:38:22.311274  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 7 more times]
E0408 19:38:30.239445  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 10 more times]
E0408 19:38:41.563695  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 14 more times]
E0408 19:38:56.745576  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 40 more times]
E0408 19:39:37.916497  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/no-preload-552268/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 27 more times]
E0408 19:40:05.619915  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/no-preload-552268/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 13 more times]
E0408 19:40:19.901947  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 19 more times]
E0408 19:40:39.643838  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 13 more times]
E0408 19:40:54.136542  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above repeated 3 more times]
E0408 19:40:57.701614  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:41:25.405877  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:41:33.998814  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:42:02.712275  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:42:17.201112  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:42:25.911730  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:42:43.730136  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:42:57.064059  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:43:22.311648  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:43:22.988816  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:43:30.238393  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 17 more times]
E0408 19:43:48.977340  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 7 more times]
E0408 19:43:56.745996  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 9 more times]
E0408 19:44:06.797386  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 30 more times]
E0408 19:44:37.916388  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/no-preload-552268/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 7 more times]
E0408 19:44:45.376587  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 33 more times]
E0408 19:45:19.812028  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:45:19.901730  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 19 more times]
E0408 19:45:39.644321  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 13 more times]
E0408 19:45:54.136363  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 3 more times]
E0408 19:45:57.701307  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[previous warning repeated 19 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (257.67273ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-257500" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
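To reproduce this check by hand, roughly equivalent commands are sketched below (assuming the kubectl context carries the profile name, as minikube normally configures it); these lines are not part of the captured test output:
	# list the dashboard pods the test polls for (namespace and label selector taken from the warnings above)
	kubectl --context old-k8s-version-257500 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# wait up to the same 9m0s the test allows for the pod to become Ready
	kubectl --context old-k8s-version-257500 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# confirm the apiserver state that status reported as "Stopped"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500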
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (256.976212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-257500 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-552268 image list                           | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-171742                           | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| image   | embed-certs-787708 image list                          | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| addons  | enable metrics-server -p newest-cni-574058             | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-574058                  | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-574058 image list                           | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 19:33:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:33:34.230845  208578 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:33:34.231171  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231183  208578 out.go:358] Setting ErrFile to fd 2...
	I0408 19:33:34.231190  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231395  208578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:33:34.232008  208578 out.go:352] Setting JSON to false
	I0408 19:33:34.232967  208578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11759,"bootTime":1744129055,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:33:34.233104  208578 start.go:139] virtualization: kvm guest
	I0408 19:33:34.235635  208578 out.go:177] * [newest-cni-574058] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:33:34.237290  208578 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:33:34.237318  208578 notify.go:220] Checking for updates...
	I0408 19:33:34.240155  208578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:33:34.241519  208578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:33:34.242927  208578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:33:34.244269  208578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:33:34.245526  208578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:33:34.247349  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:34.247740  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.247825  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.264063  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0408 19:33:34.264512  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.265026  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.265048  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.265428  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.265637  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.266022  208578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:33:34.266381  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.266435  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.281881  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0408 19:33:34.282409  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.282906  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.282946  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.283346  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.283576  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.324342  208578 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 19:33:34.325883  208578 start.go:297] selected driver: kvm2
	I0408 19:33:34.325909  208578 start.go:901] validating driver "kvm2" against &{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.326033  208578 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:33:34.326838  208578 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.326966  208578 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:33:34.345713  208578 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:33:34.346165  208578 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:33:34.346205  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:34.346244  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:34.346277  208578 start.go:340] cluster config:
	{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.346373  208578 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.349587  208578 out.go:177] * Starting "newest-cni-574058" primary control-plane node in "newest-cni-574058" cluster
	I0408 19:33:34.351259  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:34.351319  208578 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 19:33:34.351330  208578 cache.go:56] Caching tarball of preloaded images
	I0408 19:33:34.351437  208578 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:33:34.351449  208578 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 19:33:34.351545  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:34.351744  208578 start.go:360] acquireMachinesLock for newest-cni-574058: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:33:34.351787  208578 start.go:364] duration metric: took 21.755µs to acquireMachinesLock for "newest-cni-574058"
	I0408 19:33:34.351801  208578 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:33:34.351808  208578 fix.go:54] fixHost starting: 
	I0408 19:33:34.352081  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.352121  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.368244  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0408 19:33:34.368778  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.369316  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.369343  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.369695  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.369947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.370116  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:33:34.371986  208578 fix.go:112] recreateIfNeeded on newest-cni-574058: state=Stopped err=<nil>
	I0408 19:33:34.372015  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	W0408 19:33:34.372216  208578 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:33:34.374462  208578 out.go:177] * Restarting existing kvm2 VM for "newest-cni-574058" ...
	I0408 19:33:34.375950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Start
	I0408 19:33:34.376201  208578 main.go:141] libmachine: (newest-cni-574058) starting domain...
	I0408 19:33:34.376225  208578 main.go:141] libmachine: (newest-cni-574058) ensuring networks are active...
	I0408 19:33:34.377315  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network default is active
	I0408 19:33:34.377681  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network mk-newest-cni-574058 is active
	I0408 19:33:34.378244  208578 main.go:141] libmachine: (newest-cni-574058) getting domain XML...
	I0408 19:33:34.379041  208578 main.go:141] libmachine: (newest-cni-574058) creating domain...
	I0408 19:33:35.672397  208578 main.go:141] libmachine: (newest-cni-574058) waiting for IP...
	I0408 19:33:35.673656  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.674355  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.674476  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.674330  208614 retry.go:31] will retry after 282.726587ms: waiting for domain to come up
	I0408 19:33:35.959023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.959750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.959799  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.959723  208614 retry.go:31] will retry after 385.478621ms: waiting for domain to come up
	I0408 19:33:36.347685  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.348376  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.348396  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.348306  208614 retry.go:31] will retry after 404.684646ms: waiting for domain to come up
	I0408 19:33:36.755222  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.755863  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.755898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.755813  208614 retry.go:31] will retry after 497.375255ms: waiting for domain to come up
	I0408 19:33:37.254683  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.255365  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.255393  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.255296  208614 retry.go:31] will retry after 509.338649ms: waiting for domain to come up
	I0408 19:33:37.766227  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.766698  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.766734  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.766633  208614 retry.go:31] will retry after 698.136327ms: waiting for domain to come up
	I0408 19:33:38.466816  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:38.467559  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:38.467591  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:38.467497  208614 retry.go:31] will retry after 904.061633ms: waiting for domain to come up
	I0408 19:33:39.373732  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:39.374424  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:39.374455  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:39.374383  208614 retry.go:31] will retry after 1.257419141s: waiting for domain to come up
	I0408 19:33:40.634215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:40.634925  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:40.634967  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:40.634890  208614 retry.go:31] will retry after 1.399974576s: waiting for domain to come up
	I0408 19:33:42.036596  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:42.037053  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:42.037086  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:42.037022  208614 retry.go:31] will retry after 2.102706701s: waiting for domain to come up
	I0408 19:33:44.142601  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:44.143119  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:44.143148  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:44.143058  208614 retry.go:31] will retry after 1.817898038s: waiting for domain to come up
	I0408 19:33:45.963843  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:45.964510  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:45.964539  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:45.964476  208614 retry.go:31] will retry after 2.758955998s: waiting for domain to come up
	I0408 19:33:48.726573  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:48.727245  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:48.727271  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:48.727185  208614 retry.go:31] will retry after 3.898986344s: waiting for domain to come up
	I0408 19:33:52.630703  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.631576  208578 main.go:141] libmachine: (newest-cni-574058) found domain IP: 192.168.61.150
	I0408 19:33:52.631597  208578 main.go:141] libmachine: (newest-cni-574058) reserving static IP address...
	I0408 19:33:52.631608  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has current primary IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.632402  208578 main.go:141] libmachine: (newest-cni-574058) reserved static IP address 192.168.61.150 for domain newest-cni-574058
	I0408 19:33:52.632421  208578 main.go:141] libmachine: (newest-cni-574058) waiting for SSH...
	I0408 19:33:52.632458  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.632469  208578 main.go:141] libmachine: (newest-cni-574058) DBG | skip adding static IP to network mk-newest-cni-574058 - found existing host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"}
	I0408 19:33:52.632479  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Getting to WaitForSSH function...
	I0408 19:33:52.635782  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636291  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.636326  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636496  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH client type: external
	I0408 19:33:52.636521  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa (-rw-------)
	I0408 19:33:52.636548  208578 main.go:141] libmachine: (newest-cni-574058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:33:52.636562  208578 main.go:141] libmachine: (newest-cni-574058) DBG | About to run SSH command:
	I0408 19:33:52.636589  208578 main.go:141] libmachine: (newest-cni-574058) DBG | exit 0
	I0408 19:33:52.765974  208578 main.go:141] libmachine: (newest-cni-574058) DBG | SSH cmd err, output: <nil>: 
	I0408 19:33:52.766426  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetConfigRaw
	I0408 19:33:52.767016  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:52.769739  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770168  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.770220  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770438  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:52.770706  208578 machine.go:93] provisionDockerMachine start ...
	I0408 19:33:52.770731  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:52.770954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.773407  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773715  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.773750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773910  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.774110  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774289  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774410  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.774570  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.774811  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.774822  208578 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:33:52.890400  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 19:33:52.890431  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890711  208578 buildroot.go:166] provisioning hostname "newest-cni-574058"
	I0408 19:33:52.890741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890968  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.894069  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894478  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.894512  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894708  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.894945  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895134  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895285  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.895477  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.895692  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.895704  208578 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574058 && echo "newest-cni-574058" | sudo tee /etc/hostname
	I0408 19:33:53.023785  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574058
	
	I0408 19:33:53.023825  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.027006  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027468  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.027495  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.027958  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028197  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028403  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.028589  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.028798  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.028814  208578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574058/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:33:53.152963  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:33:53.152997  208578 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:33:53.153024  208578 buildroot.go:174] setting up certificates
	I0408 19:33:53.153038  208578 provision.go:84] configureAuth start
	I0408 19:33:53.153052  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:53.153364  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.156500  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157007  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.157042  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157303  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.159804  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160264  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.160306  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160485  208578 provision.go:143] copyHostCerts
	I0408 19:33:53.160550  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:33:53.160576  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:33:53.160651  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:33:53.160763  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:33:53.160773  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:33:53.160808  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:33:53.160885  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:33:53.160895  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:33:53.160928  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:33:53.161007  208578 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574058 san=[127.0.0.1 192.168.61.150 localhost minikube newest-cni-574058]
	I0408 19:33:53.270721  208578 provision.go:177] copyRemoteCerts
	I0408 19:33:53.270792  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:33:53.270820  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.273858  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274374  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.274408  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274622  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.274785  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.274944  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.275081  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.360592  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:33:53.386183  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 19:33:53.411315  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:33:53.436280  208578 provision.go:87] duration metric: took 283.223544ms to configureAuth
	I0408 19:33:53.436311  208578 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:33:53.436543  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:53.436621  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.439531  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440031  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.440073  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440215  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.440446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440612  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.441064  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.441292  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.441314  208578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:33:53.684339  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:33:53.684377  208578 machine.go:96] duration metric: took 913.653074ms to provisionDockerMachine
	I0408 19:33:53.684396  208578 start.go:293] postStartSetup for "newest-cni-574058" (driver="kvm2")
	I0408 19:33:53.684410  208578 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:33:53.684436  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.684808  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:33:53.684882  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.687947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688459  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.688493  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688773  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.688991  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.689144  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.689310  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.776771  208578 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:33:53.780766  208578 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:33:53.780795  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:33:53.780863  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:33:53.780965  208578 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:33:53.781049  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:33:53.790366  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:53.814239  208578 start.go:296] duration metric: took 129.826394ms for postStartSetup
	I0408 19:33:53.814293  208578 fix.go:56] duration metric: took 19.462483595s for fixHost
	I0408 19:33:53.814322  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.817395  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817718  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.817745  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817997  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.818268  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818450  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818601  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.818821  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.819040  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.819050  208578 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:33:53.930752  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744140833.903703018
	
	I0408 19:33:53.930848  208578 fix.go:216] guest clock: 1744140833.903703018
	I0408 19:33:53.930884  208578 fix.go:229] Guest: 2025-04-08 19:33:53.903703018 +0000 UTC Remote: 2025-04-08 19:33:53.814299407 +0000 UTC m=+19.623756541 (delta=89.403611ms)
	I0408 19:33:53.930915  208578 fix.go:200] guest clock delta is within tolerance: 89.403611ms
	I0408 19:33:53.930920  208578 start.go:83] releasing machines lock for "newest-cni-574058", held for 19.579124508s
	I0408 19:33:53.930947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.931294  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.934215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934669  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.934700  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935387  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935566  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935681  208578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:33:53.935726  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.935862  208578 ssh_runner.go:195] Run: cat /version.json
	I0408 19:33:53.935890  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.938632  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.938919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.938947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939012  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939145  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939349  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939391  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.939418  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939520  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939588  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939652  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.939704  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939819  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939965  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:54.019795  208578 ssh_runner.go:195] Run: systemctl --version
	I0408 19:33:54.043888  208578 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:33:54.188499  208578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:33:54.195169  208578 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:33:54.195259  208578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:33:54.213485  208578 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:33:54.213520  208578 start.go:495] detecting cgroup driver to use...
	I0408 19:33:54.213598  208578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:33:54.230566  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:33:54.245352  208578 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:33:54.245430  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:33:54.259817  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:33:54.273720  208578 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:33:54.392045  208578 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:33:54.542787  208578 docker.go:233] disabling docker service ...
	I0408 19:33:54.542891  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:33:54.558897  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:33:54.573787  208578 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:33:54.727894  208578 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:33:54.863643  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:33:54.878049  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:33:54.897425  208578 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 19:33:54.897490  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.908496  208578 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:33:54.908579  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.920364  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.932289  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.944311  208578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:33:54.956493  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.968393  208578 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.987441  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.999068  208578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:33:55.009771  208578 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:33:55.009850  208578 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:33:55.024523  208578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:33:55.034318  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:55.166072  208578 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:33:55.254450  208578 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:33:55.254533  208578 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:33:55.259681  208578 start.go:563] Will wait 60s for crictl version
	I0408 19:33:55.259766  208578 ssh_runner.go:195] Run: which crictl
	I0408 19:33:55.263818  208578 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:33:55.301447  208578 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:33:55.301538  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.329793  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.360507  208578 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 19:33:55.362286  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:55.365032  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365406  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:55.365440  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365660  208578 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 19:33:55.370178  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:55.385958  208578 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0408 19:33:55.387574  208578 kubeadm.go:883] updating cluster {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5
74058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:33:55.387726  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:55.387802  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:55.427839  208578 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 19:33:55.427913  208578 ssh_runner.go:195] Run: which lz4
	I0408 19:33:55.432119  208578 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:33:55.436471  208578 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:33:55.436512  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 19:33:56.853092  208578 crio.go:462] duration metric: took 1.420999494s to copy over tarball
	I0408 19:33:56.853206  208578 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:33:59.123401  208578 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.270163458s)
	I0408 19:33:59.123431  208578 crio.go:469] duration metric: took 2.27029276s to extract the tarball
	I0408 19:33:59.123439  208578 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:33:59.160214  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:59.208181  208578 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:33:59.208217  208578 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:33:59.208226  208578 kubeadm.go:934] updating node { 192.168.61.150 8443 v1.32.2 crio true true} ...
	I0408 19:33:59.208330  208578 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:33:59.208394  208578 ssh_runner.go:195] Run: crio config
	I0408 19:33:59.259080  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:59.259105  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:59.259117  208578 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0408 19:33:59.259139  208578 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574058 NodeName:newest-cni-574058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:33:59.259269  208578 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574058"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:33:59.259340  208578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 19:33:59.269297  208578 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:33:59.269396  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:33:59.279795  208578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 19:33:59.298267  208578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:33:59.317359  208578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0408 19:33:59.338191  208578 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0408 19:33:59.342078  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:59.354471  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:59.484349  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:33:59.502489  208578 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058 for IP: 192.168.61.150
	I0408 19:33:59.502521  208578 certs.go:194] generating shared ca certs ...
	I0408 19:33:59.502543  208578 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:33:59.502741  208578 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:33:59.502794  208578 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:33:59.502809  208578 certs.go:256] generating profile certs ...
	I0408 19:33:59.502923  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/client.key
	I0408 19:33:59.502988  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key.497d1bab
	I0408 19:33:59.503021  208578 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key
	I0408 19:33:59.503134  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:33:59.503171  208578 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:33:59.503185  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:33:59.503230  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:33:59.503268  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:33:59.503286  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:33:59.503326  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:59.503913  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:33:59.554815  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:33:59.587696  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:33:59.617750  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:33:59.653785  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 19:33:59.686891  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:33:59.714216  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:33:59.741329  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:33:59.767842  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:33:59.793442  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:33:59.818756  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:33:59.845009  208578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:33:59.863360  208578 ssh_runner.go:195] Run: openssl version
	I0408 19:33:59.869412  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:33:59.881065  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886169  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886244  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.892580  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:33:59.904478  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:33:59.916164  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921621  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921692  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.927944  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:33:59.939080  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:33:59.950214  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954814  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954882  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.960640  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:33:59.971958  208578 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:33:59.977116  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:33:59.983804  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:33:59.990483  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:33:59.997068  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:34:00.004168  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:34:00.010941  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 19:34:00.017644  208578 kubeadm.go:392] StartCluster: {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5740
58 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:34:00.017776  208578 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:34:00.017854  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.055073  208578 cri.go:89] found id: ""
	I0408 19:34:00.055148  208578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:34:00.065538  208578 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 19:34:00.065561  208578 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 19:34:00.065611  208578 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:34:00.075742  208578 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:34:00.076405  208578 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574058" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:00.076683  208578 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-141129/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574058" cluster setting kubeconfig missing "newest-cni-574058" context setting]
	I0408 19:34:00.077198  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:00.078950  208578 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:34:00.088631  208578 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.150
	I0408 19:34:00.088669  208578 kubeadm.go:1160] stopping kube-system containers ...
	I0408 19:34:00.088682  208578 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 19:34:00.088743  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.126373  208578 cri.go:89] found id: ""
	I0408 19:34:00.126455  208578 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 19:34:00.143354  208578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:34:00.153546  208578 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:34:00.153569  208578 kubeadm.go:157] found existing configuration files:
	
	I0408 19:34:00.153617  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:34:00.163240  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:34:00.163299  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:34:00.173240  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:34:00.183043  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:34:00.183122  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:34:00.193089  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.202337  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:34:00.202427  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.211522  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:34:00.221218  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:34:00.221298  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:34:00.231309  208578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:34:00.244340  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:00.384842  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.398082  208578 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013202999s)
	I0408 19:34:01.398108  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.602105  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.682117  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.768287  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:01.768387  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.268726  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.769354  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.787626  208578 api_server.go:72] duration metric: took 1.019343648s to wait for apiserver process to appear ...
	I0408 19:34:02.787664  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:02.787689  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.115821  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.115871  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.115897  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.124468  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.124505  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.287840  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.293980  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.294009  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:06.788746  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.794938  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.794977  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.288758  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.295612  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:07.295653  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.788430  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.793912  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:07.800651  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:07.800686  208578 api_server.go:131] duration metric: took 5.013015214s to wait for apiserver health ...
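The healthz lines above show the standard wait pattern: poll the apiserver's /healthz endpoint, treat 500 as "post-start hooks (e.g. rbac/bootstrap-roles) still running", and stop once it answers 200 "ok". Below is a minimal Go sketch of that pattern only; it is not minikube's api_server.go, and the URL, timeout, and the choice to skip TLS verification are illustrative assumptions taken from the log.

// Sketch: poll an apiserver /healthz URL until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		// Throwaway example only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// A 500 here corresponds to the [-] post-start-hook lines in the log.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.150:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}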
	I0408 19:34:07.800700  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:34:07.800707  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:34:07.803044  208578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 19:34:07.804846  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 19:34:07.818973  208578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
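The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The exact file minikube writes is not shown in the log; the sketch below uses a generic bridge + host-local configuration purely to illustrate the shape of such a file, so the subnet and plugin options are assumptions.

// Sketch: write a minimal bridge CNI conflist (illustrative, not minikube's actual file).
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// 0644 so the kubelet/CRI-O can read it; writing this path needs root in practice.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}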
	I0408 19:34:07.841790  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:07.847476  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:07.847517  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:07.847525  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:07.847547  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:07.847555  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:07.847561  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 19:34:07.847598  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:07.847609  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:07.847615  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 19:34:07.847622  208578 system_pods.go:74] duration metric: took 5.804908ms to wait for pod list to return data ...
	I0408 19:34:07.847633  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:07.860421  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:07.860460  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:07.860474  208578 node_conditions.go:105] duration metric: took 12.836545ms to run NodePressure ...
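The NodePressure verification above amounts to reading each node's capacity and pressure conditions from the API (the log reports ephemeral storage of 17734596Ki and a CPU capacity of 2). A short client-go sketch of the same query follows; the kubeconfig path is copied from the log, and the rest is an illustrative assumption rather than minikube's node_conditions.go.

// Sketch: list nodes, print capacity, and flag any pressure conditions that are True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path copied from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20604-141129/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// A node is under pressure when one of these conditions reports True.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure: %s (%s)\n", c.Type, c.Message)
			}
		}
	}
}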
	I0408 19:34:07.860496  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:08.167428  208578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:34:08.179787  208578 ops.go:34] apiserver oom_adj: -16
	I0408 19:34:08.179815  208578 kubeadm.go:597] duration metric: took 8.114247325s to restartPrimaryControlPlane
	I0408 19:34:08.179826  208578 kubeadm.go:394] duration metric: took 8.162197731s to StartCluster
	I0408 19:34:08.179854  208578 settings.go:142] acquiring lock: {Name:mk8d530f6b8ad949177759460b330a3d74710125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.180042  208578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:08.181338  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.181671  208578 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:34:08.181921  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:34:08.181826  208578 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 19:34:08.182006  208578 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574058"
	I0408 19:34:08.182028  208578 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574058"
	W0408 19:34:08.182038  208578 addons.go:247] addon storage-provisioner should already be in state true
	I0408 19:34:08.182051  208578 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574058"
	I0408 19:34:08.182074  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182083  208578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574058"
	I0408 19:34:08.182088  208578 addons.go:69] Setting dashboard=true in profile "newest-cni-574058"
	I0408 19:34:08.182105  208578 addons.go:238] Setting addon dashboard=true in "newest-cni-574058"
	W0408 19:34:08.182113  208578 addons.go:247] addon dashboard should already be in state true
	I0408 19:34:08.182131  208578 addons.go:69] Setting metrics-server=true in profile "newest-cni-574058"
	I0408 19:34:08.182169  208578 addons.go:238] Setting addon metrics-server=true in "newest-cni-574058"
	W0408 19:34:08.182186  208578 addons.go:247] addon metrics-server should already be in state true
	I0408 19:34:08.182145  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182385  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182625  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182635  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182780  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182808  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182809  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182856  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182860  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182894  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.183614  208578 out.go:177] * Verifying Kubernetes components...
	I0408 19:34:08.185136  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:34:08.205252  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0408 19:34:08.205269  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
	I0408 19:34:08.205250  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0408 19:34:08.205782  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205862  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205880  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.206304  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206326  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206463  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206480  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206521  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206542  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206772  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206876  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206951  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.207122  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.207475  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207522  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207530  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0408 19:34:08.207780  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207836  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207946  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.208393  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.208416  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.208853  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.209417  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.209467  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.210319  208578 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574058"
	W0408 19:34:08.210343  208578 addons.go:247] addon default-storageclass should already be in state true
	I0408 19:34:08.210375  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.210704  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.210751  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.225440  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0408 19:34:08.225710  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0408 19:34:08.228520  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0408 19:34:08.230369  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230448  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230873  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.230895  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231050  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231066  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231117  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.231304  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231477  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231501  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.231615  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231634  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231727  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.232131  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.232342  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.233628  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234100  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234470  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.236144  208578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:34:08.236161  208578 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 19:34:08.236145  208578 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0408 19:34:08.237353  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 19:34:08.237375  208578 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 19:34:08.237433  208578 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.237448  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:34:08.237400  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.237474  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.238698  208578 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0408 19:34:08.240058  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0408 19:34:08.240080  208578 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0408 19:34:08.240105  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.241332  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241339  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241518  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241547  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241759  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.241898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.242181  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242231  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242347  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242391  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242512  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.242521  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.243247  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243599  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.243625  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243791  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.243950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.244122  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.244231  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.254405  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0408 19:34:08.254920  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.255483  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.255513  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.255922  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.256515  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.256572  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.273680  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0408 19:34:08.274259  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.274762  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.274785  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.275206  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.275446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.277473  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.277707  208578 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.277720  208578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:34:08.277738  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.281550  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.282070  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282405  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.282639  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.282811  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.282957  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.427224  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:34:08.443994  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:08.444087  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:08.464573  208578 api_server.go:72] duration metric: took 282.851736ms to wait for apiserver process to appear ...
	I0408 19:34:08.464606  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:08.464631  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:08.471670  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:08.473124  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:08.473152  208578 api_server.go:131] duration metric: took 8.53801ms to wait for apiserver health ...
	I0408 19:34:08.473161  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:08.480501  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:08.480533  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:08.480541  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:08.480551  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:08.480559  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:08.480565  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running
	I0408 19:34:08.480573  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:08.480583  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:08.480589  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running
	I0408 19:34:08.480619  208578 system_pods.go:74] duration metric: took 7.451617ms to wait for pod list to return data ...
	I0408 19:34:08.480627  208578 default_sa.go:34] waiting for default service account to be created ...
	I0408 19:34:08.484250  208578 default_sa.go:45] found service account: "default"
	I0408 19:34:08.484277  208578 default_sa.go:55] duration metric: took 3.643294ms for default service account to be created ...
	I0408 19:34:08.484293  208578 kubeadm.go:582] duration metric: took 302.580864ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:34:08.484317  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:08.487398  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:08.487426  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:08.487441  208578 node_conditions.go:105] duration metric: took 3.118357ms to run NodePressure ...
	I0408 19:34:08.487461  208578 start.go:241] waiting for startup goroutines ...
	I0408 19:34:08.536933  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 19:34:08.536957  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 19:34:08.539452  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0408 19:34:08.539479  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0408 19:34:08.557315  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.578553  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 19:34:08.578580  208578 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 19:34:08.583900  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.606686  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0408 19:34:08.606717  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0408 19:34:08.645882  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.645916  208578 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 19:34:08.656641  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0408 19:34:08.656676  208578 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0408 19:34:08.699927  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.706202  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0408 19:34:08.706227  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0408 19:34:08.775120  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0408 19:34:08.775154  208578 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0408 19:34:08.889009  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0408 19:34:08.889058  208578 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0408 19:34:08.981237  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0408 19:34:08.981269  208578 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0408 19:34:09.040922  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0408 19:34:09.040954  208578 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0408 19:34:09.064862  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:09.064889  208578 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0408 19:34:09.141240  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:10.275126  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.717762904s)
	I0408 19:34:10.275206  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275219  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275147  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.691207244s)
	I0408 19:34:10.275285  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275304  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275579  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275630  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275636  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275644  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275649  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275653  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275675  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275714  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275933  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275990  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.276026  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.276110  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.276124  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282287  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.282320  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.282699  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.282727  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282737  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.346432  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.646437158s)
	I0408 19:34:10.346500  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346513  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346895  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.346916  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.346927  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346936  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346954  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347193  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.347211  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.347217  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347242  208578 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574058"
	I0408 19:34:10.900219  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.758920795s)
	I0408 19:34:10.900351  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900404  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.900746  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.900793  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.900816  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900830  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.901113  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.901156  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.901166  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.903191  208578 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574058 addons enable metrics-server
	
	I0408 19:34:10.905113  208578 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0408 19:34:10.906865  208578 addons.go:514] duration metric: took 2.725052548s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0408 19:34:10.906918  208578 start.go:246] waiting for cluster config update ...
	I0408 19:34:10.906936  208578 start.go:255] writing updated cluster config ...
	I0408 19:34:10.907298  208578 ssh_runner.go:195] Run: rm -f paused
	I0408 19:34:10.967232  208578 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 19:34:10.969649  208578 out.go:177] * Done! kubectl is now configured to use "newest-cni-574058" cluster and "default" namespace by default
	I0408 19:34:11.443529  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:34:11.443989  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:11.444237  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:16.444610  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:16.444853  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:26.445048  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:26.445308  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:46.445770  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:46.446104  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447251  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:35:26.447505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447529  205913 kubeadm.go:310] 
	I0408 19:35:26.447585  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:35:26.447662  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:35:26.447677  205913 kubeadm.go:310] 
	I0408 19:35:26.447726  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:35:26.447781  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:35:26.447887  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:35:26.447894  205913 kubeadm.go:310] 
	I0408 19:35:26.448020  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:35:26.448076  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:35:26.448126  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:35:26.448136  205913 kubeadm.go:310] 
	I0408 19:35:26.448267  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:35:26.448411  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:35:26.448474  205913 kubeadm.go:310] 
	I0408 19:35:26.448621  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:35:26.448774  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:35:26.448915  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:35:26.449049  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:35:26.449115  205913 kubeadm.go:310] 
	I0408 19:35:26.449270  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:35:26.449395  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:35:26.449512  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0408 19:35:26.449660  205913 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
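The repeated [kubelet-check] lines in the failure above come from kubeadm probing the kubelet's local healthz endpoint on port 10248; "connection refused" means the kubelet never started listening, which is why the suggested next steps are 'systemctl status kubelet' and 'journalctl -xeu kubelet'. A tiny Go sketch of that probe follows, with the URL taken from the log and the retry loop an illustrative assumption.

// Sketch: probe the kubelet healthz endpoint the way kubeadm's kubelet-check does.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the log: dial tcp 127.0.0.1:10248: connect: connection refused
			fmt.Println("kubelet not healthy:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
		return
	}
}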
	
	I0408 19:35:26.449711  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:35:26.891169  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:35:26.904909  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:35:26.914475  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:35:26.914502  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:35:26.914553  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:35:26.924306  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:35:26.924374  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:35:26.934487  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:35:26.944461  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:35:26.944529  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:35:26.954995  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.964855  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:35:26.964941  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.975439  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:35:26.985173  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:35:26.985239  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:35:26.995433  205913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:35:27.204002  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:37:22.974768  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:37:22.974883  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:37:22.976335  205913 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:37:22.976383  205913 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:37:22.976466  205913 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:37:22.976595  205913 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:37:22.976752  205913 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:37:22.976829  205913 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:37:22.979175  205913 out.go:235]   - Generating certificates and keys ...
	I0408 19:37:22.979274  205913 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:37:22.979335  205913 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:37:22.979409  205913 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:37:22.979461  205913 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:37:22.979537  205913 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:37:22.979599  205913 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:37:22.979653  205913 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:37:22.979723  205913 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:37:22.979801  205913 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:37:22.979874  205913 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:37:22.979909  205913 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:37:22.979973  205913 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:37:22.980044  205913 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:37:22.980118  205913 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:37:22.980189  205913 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:37:22.980236  205913 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:37:22.980358  205913 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:37:22.980475  205913 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:37:22.980538  205913 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:37:22.980630  205913 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:37:22.982169  205913 out.go:235]   - Booting up control plane ...
	I0408 19:37:22.982280  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:37:22.982367  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:37:22.982450  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:37:22.982565  205913 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:37:22.982720  205913 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:37:22.982764  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:37:22.982823  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.982981  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983043  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983218  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983314  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983589  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983784  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983874  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.984082  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.984105  205913 kubeadm.go:310] 
	I0408 19:37:22.984143  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:37:22.984179  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:37:22.984185  205913 kubeadm.go:310] 
	I0408 19:37:22.984216  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:37:22.984247  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:37:22.984339  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:37:22.984346  205913 kubeadm.go:310] 
	I0408 19:37:22.984449  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:37:22.984495  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:37:22.984524  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:37:22.984531  205913 kubeadm.go:310] 
	I0408 19:37:22.984627  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:37:22.984699  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:37:22.984706  205913 kubeadm.go:310] 
	I0408 19:37:22.984805  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:37:22.984952  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:37:22.985064  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:37:22.985134  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:37:22.985199  205913 kubeadm.go:310] 
	I0408 19:37:22.985210  205913 kubeadm.go:394] duration metric: took 7m56.100848189s to StartCluster
	I0408 19:37:22.985262  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:37:22.985318  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:37:23.020922  205913 cri.go:89] found id: ""
	I0408 19:37:23.020963  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.020980  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:37:23.020989  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:37:23.021057  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:37:23.053119  205913 cri.go:89] found id: ""
	I0408 19:37:23.053155  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.053168  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:37:23.053179  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:37:23.053251  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:37:23.085925  205913 cri.go:89] found id: ""
	I0408 19:37:23.085959  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.085968  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:37:23.085976  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:37:23.086026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:37:23.119428  205913 cri.go:89] found id: ""
	I0408 19:37:23.119460  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.119472  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:37:23.119482  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:37:23.119555  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:37:23.152519  205913 cri.go:89] found id: ""
	I0408 19:37:23.152548  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.152556  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:37:23.152563  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:37:23.152616  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:37:23.185610  205913 cri.go:89] found id: ""
	I0408 19:37:23.185653  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.185660  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:37:23.185667  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:37:23.185722  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:37:23.220368  205913 cri.go:89] found id: ""
	I0408 19:37:23.220396  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.220404  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:37:23.220411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:37:23.220465  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:37:23.253979  205913 cri.go:89] found id: ""
	I0408 19:37:23.254016  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.254029  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:37:23.254044  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:37:23.254061  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:37:23.304529  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:37:23.304574  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:37:23.318406  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:37:23.318443  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:37:23.393733  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:37:23.393774  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:37:23.393795  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:37:23.495288  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:37:23.495333  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0408 19:37:23.534511  205913 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:37:23.534568  205913 out.go:270] * 
	W0408 19:37:23.534629  205913 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.534643  205913 out.go:270] * 
	W0408 19:37:23.535480  205913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:37:23.539860  205913 out.go:201] 
	W0408 19:37:23.541197  205913 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.541240  205913 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:37:23.541256  205913 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:37:23.542872  205913 out.go:201] 
	
	
	==> CRI-O <==
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.085875042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141586085853744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1016c640-4182-491d-9fb9-e06ad4aceac6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.086485468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e757c462-f1ac-46b1-86c6-4c88a00d5a2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.086536639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e757c462-f1ac-46b1-86c6-4c88a00d5a2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.086570921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e757c462-f1ac-46b1-86c6-4c88a00d5a2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.119946748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4b24f43-ea0c-4d8c-82df-34d3ea3b55d3 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.120026038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4b24f43-ea0c-4d8c-82df-34d3ea3b55d3 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.121425438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77ac3982-4a10-49d8-a774-344187143e18 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.121811580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141586121790083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77ac3982-4a10-49d8-a774-344187143e18 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.122456422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be980bff-9185-4169-a38b-32051c0e1f70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.122539150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be980bff-9185-4169-a38b-32051c0e1f70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.122581045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=be980bff-9185-4169-a38b-32051c0e1f70 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.156538245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ade1789a-2e46-4cba-9699-91ceefdc7887 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.156640709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ade1789a-2e46-4cba-9699-91ceefdc7887 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.158085736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a381ce9d-b769-4a1d-a0dc-53140a158c48 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.158499543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141586158474308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a381ce9d-b769-4a1d-a0dc-53140a158c48 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.159251115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95ccb577-2ab1-412f-8fb5-8d512d9294f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.159340048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95ccb577-2ab1-412f-8fb5-8d512d9294f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.159383710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=95ccb577-2ab1-412f-8fb5-8d512d9294f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.191111891Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1c1ba79-8f9a-4e55-a355-0c98bc5e7e65 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.191196132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1c1ba79-8f9a-4e55-a355-0c98bc5e7e65 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.192183426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2bc0cb1-a929-4c89-bc2a-9149320505d6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.192560448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141586192539660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2bc0cb1-a929-4c89-bc2a-9149320505d6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.193242403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fbfcc86-9f7f-4103-8b6b-dc64365f1bcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.193308045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fbfcc86-9f7f-4103-8b6b-dc64365f1bcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:46:26 old-k8s-version-257500 crio[629]: time="2025-04-08 19:46:26.193347093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6fbfcc86-9f7f-4103-8b6b-dc64365f1bcd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 19:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049597] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039830] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.124668] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.083532] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.166101] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.061460] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064691] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.194145] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.127525] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.273689] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.301504] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058099] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817423] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +11.268261] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 19:33] systemd-fstab-generator[4958]: Ignoring "noauto" option for root device
	[Apr 8 19:35] systemd-fstab-generator[5233]: Ignoring "noauto" option for root device
	[  +0.061975] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:46:26 up 17 min,  0 users,  load average: 0.04, 0.07, 0.02
	Linux old-k8s-version-257500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00025ab60, 0xc000b60120, 0xc000b60120, 0x0, 0x0)
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000b421c0)
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: goroutine 146 [runnable]:
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000995b80, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000b28a80, 0x0, 0x0)
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000b421c0)
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 08 19:46:23 old-k8s-version-257500 kubelet[6416]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 08 19:46:23 old-k8s-version-257500 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 08 19:46:23 old-k8s-version-257500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 08 19:46:24 old-k8s-version-257500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 08 19:46:24 old-k8s-version-257500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 19:46:24 old-k8s-version-257500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 08 19:46:24 old-k8s-version-257500 kubelet[6424]: I0408 19:46:24.630755    6424 server.go:416] Version: v1.20.0
	Apr 08 19:46:24 old-k8s-version-257500 kubelet[6424]: I0408 19:46:24.631106    6424 server.go:837] Client rotation is on, will bootstrap in background
	Apr 08 19:46:24 old-k8s-version-257500 kubelet[6424]: I0408 19:46:24.633137    6424 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 08 19:46:24 old-k8s-version-257500 kubelet[6424]: I0408 19:46:24.634188    6424 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 08 19:46:24 old-k8s-version-257500 kubelet[6424]: W0408 19:46:24.634195    6424 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (252.110168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-257500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.59s)
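
The kubeadm and minikube messages captured above already name the relevant triage commands; gathered in one place, a manual follow-up on this failure might look roughly like the session below. This is an illustrative sketch, not output captured by the test run: the profile name old-k8s-version-257500, the crio socket path, and the --extra-config suggestion are taken verbatim from the log above, while the idea of running the checks interactively over `minikube ssh` is an editorial assumption.

	# open a shell in the profile's VM (assumed workflow, not part of the test)
	minikube ssh -p old-k8s-version-257500
	
	# inside the VM: check the kubelet, as kubeadm suggests above
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	
	# inside the VM: list any control-plane containers via CRI-O, as kubeadm suggests above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	# back on the host: collect the full log bundle for a bug report, as minikube suggests above
	minikube logs --file=logs.txt -p old-k8s-version-257500
	
	# retry the start with the cgroup-driver override suggested in the minikube output above
	minikube start -p old-k8s-version-257500 --extra-config=kubelet.cgroup-driver=systemd

Whether the cgroup-driver override actually resolves the repeated kubelet restarts seen in the systemd log above would need to be confirmed on the affected host; the commands are listed only as the starting points the tools themselves recommend.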

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (389.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:46:33.999044  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 33 times)
E0408 19:47:25.911550  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 17 times)
E0408 19:47:43.730611  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 39 times)
E0408 19:48:22.311000  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 8 times)
E0408 19:48:30.239226  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 26 times)
E0408 19:48:56.745257  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
(identical warning repeated 42 times)
E0408 19:49:37.916396  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/no-preload-552268/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 42 times in succession]
E0408 19:50:19.902435  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 19 times in succession]
E0408 19:50:39.643899  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 15 times in succession]
E0408 19:50:54.136257  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 3 times in succession]
E0408 19:50:57.701337  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 4 times in succession]
E0408 19:51:00.982035  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/no-preload-552268/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 32 times in succession]
E0408 19:51:33.314720  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:51:33.999011  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
[the warning above was repeated 47 times in succession]
E0408 19:52:20.767468  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/default-k8s-diff-port-171742/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:52:25.911496  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
E0408 19:52:43.730414  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.192:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.192:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (237.899415ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-257500" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-257500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-257500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.448µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-257500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
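For reference, once the old-k8s-version-257500 apiserver is reachable again, the image check that timed out above can be reproduced by hand with the same invocations the test uses. This is a minimal sketch only: the profile name, namespace, label selector and expected image string are taken from the log lines above, and the final grep is an illustrative addition rather than part of the test itself.

	# Confirm the apiserver is up for the profile before issuing kubectl commands
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500

	# List the dashboard pods the helper polls for (same label selector as the warnings above)
	kubectl --context old-k8s-version-257500 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# Inspect the scraper deployment and check that it references the expected custom image
	kubectl --context old-k8s-version-257500 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep 'registry.k8s.io/echoserver:1.4'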
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (249.538886ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-257500 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-552268 image list                           | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| delete  | -p no-preload-552268                                   | no-preload-552268            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:32 UTC |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:32 UTC | 08 Apr 25 19:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-171742                           | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-171742 | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | default-k8s-diff-port-171742                           |                              |         |         |                     |                     |
	| image   | embed-certs-787708 image list                          | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| delete  | -p embed-certs-787708                                  | embed-certs-787708           | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	| addons  | enable metrics-server -p newest-cni-574058             | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-574058                  | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-574058 --memory=2200 --alsologtostderr   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:33 UTC | 08 Apr 25 19:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-574058 image list                           | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	| delete  | -p newest-cni-574058                                   | newest-cni-574058            | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 19:33:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 19:33:34.230845  208578 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:33:34.231171  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231183  208578 out.go:358] Setting ErrFile to fd 2...
	I0408 19:33:34.231190  208578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:33:34.231395  208578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:33:34.232008  208578 out.go:352] Setting JSON to false
	I0408 19:33:34.232967  208578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11759,"bootTime":1744129055,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:33:34.233104  208578 start.go:139] virtualization: kvm guest
	I0408 19:33:34.235635  208578 out.go:177] * [newest-cni-574058] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:33:34.237290  208578 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:33:34.237318  208578 notify.go:220] Checking for updates...
	I0408 19:33:34.240155  208578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:33:34.241519  208578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:33:34.242927  208578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:33:34.244269  208578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:33:34.245526  208578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:33:34.247349  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:34.247740  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.247825  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.264063  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0408 19:33:34.264512  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.265026  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.265048  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.265428  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.265637  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.266022  208578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:33:34.266381  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.266435  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.281881  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43093
	I0408 19:33:34.282409  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.282906  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.282946  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.283346  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.283576  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.324342  208578 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 19:33:34.325883  208578 start.go:297] selected driver: kvm2
	I0408 19:33:34.325909  208578 start.go:901] validating driver "kvm2" against &{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.326033  208578 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:33:34.326838  208578 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.326966  208578 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 19:33:34.345713  208578 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 19:33:34.346165  208578 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:33:34.346205  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:34.346244  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:34.346277  208578 start.go:340] cluster config:
	{Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:33:34.346373  208578 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 19:33:34.349587  208578 out.go:177] * Starting "newest-cni-574058" primary control-plane node in "newest-cni-574058" cluster
	I0408 19:33:34.351259  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:34.351319  208578 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 19:33:34.351330  208578 cache.go:56] Caching tarball of preloaded images
	I0408 19:33:34.351437  208578 preload.go:172] Found /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 19:33:34.351449  208578 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 19:33:34.351545  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:34.351744  208578 start.go:360] acquireMachinesLock for newest-cni-574058: {Name:mk9f7a747fe5c51efa93431b771c455683360918 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 19:33:34.351787  208578 start.go:364] duration metric: took 21.755µs to acquireMachinesLock for "newest-cni-574058"
	I0408 19:33:34.351801  208578 start.go:96] Skipping create...Using existing machine configuration
	I0408 19:33:34.351808  208578 fix.go:54] fixHost starting: 
	I0408 19:33:34.352081  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:33:34.352121  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:33:34.368244  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0408 19:33:34.368778  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:33:34.369316  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:33:34.369343  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:33:34.369695  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:33:34.369947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:34.370116  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:33:34.371986  208578 fix.go:112] recreateIfNeeded on newest-cni-574058: state=Stopped err=<nil>
	I0408 19:33:34.372015  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	W0408 19:33:34.372216  208578 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 19:33:34.374462  208578 out.go:177] * Restarting existing kvm2 VM for "newest-cni-574058" ...
	I0408 19:33:34.375950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Start
	I0408 19:33:34.376201  208578 main.go:141] libmachine: (newest-cni-574058) starting domain...
	I0408 19:33:34.376225  208578 main.go:141] libmachine: (newest-cni-574058) ensuring networks are active...
	I0408 19:33:34.377315  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network default is active
	I0408 19:33:34.377681  208578 main.go:141] libmachine: (newest-cni-574058) Ensuring network mk-newest-cni-574058 is active
	I0408 19:33:34.378244  208578 main.go:141] libmachine: (newest-cni-574058) getting domain XML...
	I0408 19:33:34.379041  208578 main.go:141] libmachine: (newest-cni-574058) creating domain...
	I0408 19:33:35.672397  208578 main.go:141] libmachine: (newest-cni-574058) waiting for IP...
	I0408 19:33:35.673656  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.674355  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.674476  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.674330  208614 retry.go:31] will retry after 282.726587ms: waiting for domain to come up
	I0408 19:33:35.959023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:35.959750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:35.959799  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:35.959723  208614 retry.go:31] will retry after 385.478621ms: waiting for domain to come up
	I0408 19:33:36.347685  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.348376  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.348396  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.348306  208614 retry.go:31] will retry after 404.684646ms: waiting for domain to come up
	I0408 19:33:36.755222  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:36.755863  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:36.755898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:36.755813  208614 retry.go:31] will retry after 497.375255ms: waiting for domain to come up
	I0408 19:33:37.254683  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.255365  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.255393  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.255296  208614 retry.go:31] will retry after 509.338649ms: waiting for domain to come up
	I0408 19:33:37.766227  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:37.766698  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:37.766734  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:37.766633  208614 retry.go:31] will retry after 698.136327ms: waiting for domain to come up
	I0408 19:33:38.466816  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:38.467559  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:38.467591  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:38.467497  208614 retry.go:31] will retry after 904.061633ms: waiting for domain to come up
	I0408 19:33:39.373732  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:39.374424  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:39.374455  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:39.374383  208614 retry.go:31] will retry after 1.257419141s: waiting for domain to come up
	I0408 19:33:40.634215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:40.634925  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:40.634967  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:40.634890  208614 retry.go:31] will retry after 1.399974576s: waiting for domain to come up
	I0408 19:33:42.036596  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:42.037053  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:42.037086  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:42.037022  208614 retry.go:31] will retry after 2.102706701s: waiting for domain to come up
	I0408 19:33:44.142601  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:44.143119  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:44.143148  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:44.143058  208614 retry.go:31] will retry after 1.817898038s: waiting for domain to come up
	I0408 19:33:45.963843  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:45.964510  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:45.964539  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:45.964476  208614 retry.go:31] will retry after 2.758955998s: waiting for domain to come up
	I0408 19:33:48.726573  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:48.727245  208578 main.go:141] libmachine: (newest-cni-574058) DBG | unable to find current IP address of domain newest-cni-574058 in network mk-newest-cni-574058
	I0408 19:33:48.727271  208578 main.go:141] libmachine: (newest-cni-574058) DBG | I0408 19:33:48.727185  208614 retry.go:31] will retry after 3.898986344s: waiting for domain to come up
	I0408 19:33:52.630703  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.631576  208578 main.go:141] libmachine: (newest-cni-574058) found domain IP: 192.168.61.150
	I0408 19:33:52.631597  208578 main.go:141] libmachine: (newest-cni-574058) reserving static IP address...
	I0408 19:33:52.631608  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has current primary IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.632402  208578 main.go:141] libmachine: (newest-cni-574058) reserved static IP address 192.168.61.150 for domain newest-cni-574058
	I0408 19:33:52.632421  208578 main.go:141] libmachine: (newest-cni-574058) waiting for SSH...
	I0408 19:33:52.632458  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.632469  208578 main.go:141] libmachine: (newest-cni-574058) DBG | skip adding static IP to network mk-newest-cni-574058 - found existing host DHCP lease matching {name: "newest-cni-574058", mac: "52:54:00:60:1d:f3", ip: "192.168.61.150"}
	I0408 19:33:52.632479  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Getting to WaitForSSH function...
	I0408 19:33:52.635782  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636291  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.636326  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.636496  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH client type: external
	I0408 19:33:52.636521  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa (-rw-------)
	I0408 19:33:52.636548  208578 main.go:141] libmachine: (newest-cni-574058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 19:33:52.636562  208578 main.go:141] libmachine: (newest-cni-574058) DBG | About to run SSH command:
	I0408 19:33:52.636589  208578 main.go:141] libmachine: (newest-cni-574058) DBG | exit 0
	I0408 19:33:52.765974  208578 main.go:141] libmachine: (newest-cni-574058) DBG | SSH cmd err, output: <nil>: 
	I0408 19:33:52.766426  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetConfigRaw
	I0408 19:33:52.767016  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:52.769739  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770168  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.770220  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.770438  208578 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/config.json ...
	I0408 19:33:52.770706  208578 machine.go:93] provisionDockerMachine start ...
	I0408 19:33:52.770731  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:52.770954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.773407  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773715  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.773750  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.773910  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.774110  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774289  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.774410  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.774570  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.774811  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.774822  208578 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 19:33:52.890400  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 19:33:52.890431  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890711  208578 buildroot.go:166] provisioning hostname "newest-cni-574058"
	I0408 19:33:52.890741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:52.890968  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:52.894069  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894478  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:52.894512  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:52.894708  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:52.894945  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895134  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:52.895285  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:52.895477  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:52.895692  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:52.895704  208578 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574058 && echo "newest-cni-574058" | sudo tee /etc/hostname
	I0408 19:33:53.023785  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574058
	
	I0408 19:33:53.023825  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.027006  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027468  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.027495  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.027741  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.027958  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028197  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.028403  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.028589  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.028798  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.028814  208578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574058/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 19:33:53.152963  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 19:33:53.152997  208578 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-141129/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-141129/.minikube}
	I0408 19:33:53.153024  208578 buildroot.go:174] setting up certificates
	I0408 19:33:53.153038  208578 provision.go:84] configureAuth start
	I0408 19:33:53.153052  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetMachineName
	I0408 19:33:53.153364  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.156500  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157007  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.157042  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.157303  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.159804  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160264  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.160306  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.160485  208578 provision.go:143] copyHostCerts
	I0408 19:33:53.160550  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem, removing ...
	I0408 19:33:53.160576  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem
	I0408 19:33:53.160651  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/ca.pem (1082 bytes)
	I0408 19:33:53.160763  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem, removing ...
	I0408 19:33:53.160773  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem
	I0408 19:33:53.160808  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/cert.pem (1123 bytes)
	I0408 19:33:53.160885  208578 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem, removing ...
	I0408 19:33:53.160895  208578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem
	I0408 19:33:53.160928  208578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-141129/.minikube/key.pem (1679 bytes)
	I0408 19:33:53.161007  208578 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574058 san=[127.0.0.1 192.168.61.150 localhost minikube newest-cni-574058]
	I0408 19:33:53.270721  208578 provision.go:177] copyRemoteCerts
	I0408 19:33:53.270792  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 19:33:53.270820  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.273858  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274374  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.274408  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.274622  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.274785  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.274944  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.275081  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.360592  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 19:33:53.386183  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 19:33:53.411315  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 19:33:53.436280  208578 provision.go:87] duration metric: took 283.223544ms to configureAuth
	I0408 19:33:53.436311  208578 buildroot.go:189] setting minikube options for container-runtime
	I0408 19:33:53.436543  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:33:53.436621  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.439531  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440031  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.440073  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.440215  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.440446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440612  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.440870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.441064  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.441292  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.441314  208578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 19:33:53.684339  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 19:33:53.684377  208578 machine.go:96] duration metric: took 913.653074ms to provisionDockerMachine
	I0408 19:33:53.684396  208578 start.go:293] postStartSetup for "newest-cni-574058" (driver="kvm2")
	I0408 19:33:53.684410  208578 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 19:33:53.684436  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.684808  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 19:33:53.684882  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.687947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688459  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.688493  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.688773  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.688991  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.689144  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.689310  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.776771  208578 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 19:33:53.780766  208578 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 19:33:53.780795  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/addons for local assets ...
	I0408 19:33:53.780863  208578 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-141129/.minikube/files for local assets ...
	I0408 19:33:53.780965  208578 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem -> 1484872.pem in /etc/ssl/certs
	I0408 19:33:53.781049  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 19:33:53.790366  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:53.814239  208578 start.go:296] duration metric: took 129.826394ms for postStartSetup
	I0408 19:33:53.814293  208578 fix.go:56] duration metric: took 19.462483595s for fixHost
	I0408 19:33:53.814322  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.817395  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817718  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.817745  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.817997  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.818268  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818450  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.818601  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.818821  208578 main.go:141] libmachine: Using SSH client type: native
	I0408 19:33:53.819040  208578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.150 22 <nil> <nil>}
	I0408 19:33:53.819050  208578 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 19:33:53.930752  208578 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744140833.903703018
	
	I0408 19:33:53.930848  208578 fix.go:216] guest clock: 1744140833.903703018
	I0408 19:33:53.930884  208578 fix.go:229] Guest: 2025-04-08 19:33:53.903703018 +0000 UTC Remote: 2025-04-08 19:33:53.814299407 +0000 UTC m=+19.623756541 (delta=89.403611ms)
	I0408 19:33:53.930915  208578 fix.go:200] guest clock delta is within tolerance: 89.403611ms
	I0408 19:33:53.930920  208578 start.go:83] releasing machines lock for "newest-cni-574058", held for 19.579124508s
	I0408 19:33:53.930947  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.931294  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:53.934215  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934669  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.934700  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.934870  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935387  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935566  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:33:53.935681  208578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 19:33:53.935726  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.935862  208578 ssh_runner.go:195] Run: cat /version.json
	I0408 19:33:53.935890  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:33:53.938632  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.938919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.938947  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939012  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939145  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939349  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939391  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:53.939418  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:53.939520  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939588  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:33:53.939652  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:53.939704  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:33:53.939819  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:33:53.939965  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:33:54.019795  208578 ssh_runner.go:195] Run: systemctl --version
	I0408 19:33:54.043888  208578 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 19:33:54.188499  208578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 19:33:54.195169  208578 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 19:33:54.195259  208578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 19:33:54.213485  208578 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 19:33:54.213520  208578 start.go:495] detecting cgroup driver to use...
	I0408 19:33:54.213598  208578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 19:33:54.230566  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 19:33:54.245352  208578 docker.go:217] disabling cri-docker service (if available) ...
	I0408 19:33:54.245430  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 19:33:54.259817  208578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 19:33:54.273720  208578 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 19:33:54.392045  208578 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 19:33:54.542787  208578 docker.go:233] disabling docker service ...
	I0408 19:33:54.542891  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 19:33:54.558897  208578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 19:33:54.573787  208578 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 19:33:54.727894  208578 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 19:33:54.863643  208578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 19:33:54.878049  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 19:33:54.897425  208578 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0408 19:33:54.897490  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.908496  208578 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 19:33:54.908579  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.920364  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.932289  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.944311  208578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 19:33:54.956493  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.968393  208578 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.987441  208578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 19:33:54.999068  208578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 19:33:55.009771  208578 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 19:33:55.009850  208578 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 19:33:55.024523  208578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 19:33:55.034318  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:55.166072  208578 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 19:33:55.254450  208578 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 19:33:55.254533  208578 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 19:33:55.259681  208578 start.go:563] Will wait 60s for crictl version
	I0408 19:33:55.259766  208578 ssh_runner.go:195] Run: which crictl
	I0408 19:33:55.263818  208578 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 19:33:55.301447  208578 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 19:33:55.301538  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.329793  208578 ssh_runner.go:195] Run: crio --version
	I0408 19:33:55.360507  208578 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0408 19:33:55.362286  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetIP
	I0408 19:33:55.365032  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365406  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:33:55.365440  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:33:55.365660  208578 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 19:33:55.370178  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:55.385958  208578 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0408 19:33:55.387574  208578 kubeadm.go:883] updating cluster {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 19:33:55.387726  208578 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 19:33:55.387802  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:55.427839  208578 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0408 19:33:55.427913  208578 ssh_runner.go:195] Run: which lz4
	I0408 19:33:55.432119  208578 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 19:33:55.436471  208578 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 19:33:55.436512  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0408 19:33:56.853092  208578 crio.go:462] duration metric: took 1.420999494s to copy over tarball
	I0408 19:33:56.853206  208578 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 19:33:59.123401  208578 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.270163458s)
	I0408 19:33:59.123431  208578 crio.go:469] duration metric: took 2.27029276s to extract the tarball
	I0408 19:33:59.123439  208578 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 19:33:59.160214  208578 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 19:33:59.208181  208578 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 19:33:59.208217  208578 cache_images.go:84] Images are preloaded, skipping loading
	I0408 19:33:59.208226  208578 kubeadm.go:934] updating node { 192.168.61.150 8443 v1.32.2 crio true true} ...
	I0408 19:33:59.208330  208578 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 19:33:59.208394  208578 ssh_runner.go:195] Run: crio config
	I0408 19:33:59.259080  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:33:59.259105  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:33:59.259117  208578 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0408 19:33:59.259139  208578 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.150 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574058 NodeName:newest-cni-574058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 19:33:59.259269  208578 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574058"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 19:33:59.259340  208578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 19:33:59.269297  208578 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 19:33:59.269396  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 19:33:59.279795  208578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0408 19:33:59.298267  208578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 19:33:59.317359  208578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0408 19:33:59.338191  208578 ssh_runner.go:195] Run: grep 192.168.61.150	control-plane.minikube.internal$ /etc/hosts
	I0408 19:33:59.342078  208578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 19:33:59.354471  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:33:59.484349  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:33:59.502489  208578 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058 for IP: 192.168.61.150
	I0408 19:33:59.502521  208578 certs.go:194] generating shared ca certs ...
	I0408 19:33:59.502543  208578 certs.go:226] acquiring lock for ca certs: {Name:mkd37ce74a5e6f5f5300314397402f7d571fc230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:33:59.502741  208578 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key
	I0408 19:33:59.502794  208578 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key
	I0408 19:33:59.502809  208578 certs.go:256] generating profile certs ...
	I0408 19:33:59.502923  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/client.key
	I0408 19:33:59.502988  208578 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key.497d1bab
	I0408 19:33:59.503021  208578 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key
	I0408 19:33:59.503134  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem (1338 bytes)
	W0408 19:33:59.503171  208578 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487_empty.pem, impossibly tiny 0 bytes
	I0408 19:33:59.503185  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca-key.pem (1675 bytes)
	I0408 19:33:59.503230  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/ca.pem (1082 bytes)
	I0408 19:33:59.503268  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/cert.pem (1123 bytes)
	I0408 19:33:59.503286  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/certs/key.pem (1679 bytes)
	I0408 19:33:59.503326  208578 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem (1708 bytes)
	I0408 19:33:59.503913  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 19:33:59.554815  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 19:33:59.587696  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 19:33:59.617750  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 19:33:59.653785  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 19:33:59.686891  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 19:33:59.714216  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 19:33:59.741329  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/newest-cni-574058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 19:33:59.767842  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/ssl/certs/1484872.pem --> /usr/share/ca-certificates/1484872.pem (1708 bytes)
	I0408 19:33:59.793442  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 19:33:59.818756  208578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-141129/.minikube/certs/148487.pem --> /usr/share/ca-certificates/148487.pem (1338 bytes)
	I0408 19:33:59.845009  208578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 19:33:59.863360  208578 ssh_runner.go:195] Run: openssl version
	I0408 19:33:59.869412  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1484872.pem && ln -fs /usr/share/ca-certificates/1484872.pem /etc/ssl/certs/1484872.pem"
	I0408 19:33:59.881065  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886169  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 18:21 /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.886244  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1484872.pem
	I0408 19:33:59.892580  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1484872.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 19:33:59.904478  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 19:33:59.916164  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921621  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.921692  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 19:33:59.927944  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 19:33:59.939080  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148487.pem && ln -fs /usr/share/ca-certificates/148487.pem /etc/ssl/certs/148487.pem"
	I0408 19:33:59.950214  208578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954814  208578 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 18:21 /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.954882  208578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148487.pem
	I0408 19:33:59.960640  208578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/148487.pem /etc/ssl/certs/51391683.0"
	I0408 19:33:59.971958  208578 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 19:33:59.977116  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 19:33:59.983804  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 19:33:59.990483  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 19:33:59.997068  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 19:34:00.004168  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 19:34:00.010941  208578 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 19:34:00.017644  208578 kubeadm.go:392] StartCluster: {Name:newest-cni-574058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-574058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 19:34:00.017776  208578 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 19:34:00.017854  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.055073  208578 cri.go:89] found id: ""
	I0408 19:34:00.055148  208578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 19:34:00.065538  208578 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0408 19:34:00.065561  208578 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0408 19:34:00.065611  208578 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 19:34:00.075742  208578 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 19:34:00.076405  208578 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574058" does not appear in /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:00.076683  208578 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-141129/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574058" cluster setting kubeconfig missing "newest-cni-574058" context setting]
	I0408 19:34:00.077198  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:00.078950  208578 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 19:34:00.088631  208578 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.150
	I0408 19:34:00.088669  208578 kubeadm.go:1160] stopping kube-system containers ...
	I0408 19:34:00.088682  208578 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 19:34:00.088743  208578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 19:34:00.126373  208578 cri.go:89] found id: ""
	I0408 19:34:00.126455  208578 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 19:34:00.143354  208578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:34:00.153546  208578 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:34:00.153569  208578 kubeadm.go:157] found existing configuration files:
	
	I0408 19:34:00.153617  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:34:00.163240  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:34:00.163299  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:34:00.173240  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:34:00.183043  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:34:00.183122  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:34:00.193089  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.202337  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:34:00.202427  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:34:00.211522  208578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:34:00.221218  208578 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:34:00.221298  208578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:34:00.231309  208578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 19:34:00.244340  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:00.384842  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.398082  208578 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013202999s)
	I0408 19:34:01.398108  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.602105  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.682117  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:01.768287  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:01.768387  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.268726  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.769354  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:02.787626  208578 api_server.go:72] duration metric: took 1.019343648s to wait for apiserver process to appear ...
	I0408 19:34:02.787664  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:02.787689  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.115821  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.115871  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.115897  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.124468  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 19:34:06.124505  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 19:34:06.287840  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.293980  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.294009  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:06.788746  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:06.794938  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:06.794977  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.288758  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.295612  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 19:34:07.295653  208578 api_server.go:103] status: https://192.168.61.150:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 19:34:07.788430  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:07.793912  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:07.800651  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:07.800686  208578 api_server.go:131] duration metric: took 5.013015214s to wait for apiserver health ...
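(Editor's note: the 403 -> 500 -> 200 progression above is the apiserver finishing startup: /healthz is forbidden to the anonymous user until the RBAC bootstrap roles that permit unauthenticated health checks are in place, then it returns 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) complete, and finally 200 ("ok"). A minimal Go sketch of this kind of poll loop follows; the hard-coded URL and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual client setup.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Illustration only: a real client would trust the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log above
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }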
	I0408 19:34:07.800700  208578 cni.go:84] Creating CNI manager for ""
	I0408 19:34:07.800707  208578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 19:34:07.803044  208578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 19:34:07.804846  208578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 19:34:07.818973  208578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
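(Editor's note: the 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a generic bridge CNI conflist has roughly the shape below, shown here as a Go string constant; the plugin list, subnet, and field values are assumptions for illustration, not the exact file minikube generated.)

    // Hypothetical example of a bridge CNI conflist; contents are illustrative.
    const exampleBridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`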
	I0408 19:34:07.841790  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:07.847476  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:07.847517  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:07.847525  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:07.847547  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:07.847555  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:07.847561  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 19:34:07.847598  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:07.847609  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:07.847615  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 19:34:07.847622  208578 system_pods.go:74] duration metric: took 5.804908ms to wait for pod list to return data ...
	I0408 19:34:07.847633  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:07.860421  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:07.860460  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:07.860474  208578 node_conditions.go:105] duration metric: took 12.836545ms to run NodePressure ...
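(Editor's note: the system_pods.go and node_conditions.go steps above amount to "list kube-system pods and read node capacity" against the freshly restarted apiserver. A self-contained client-go sketch of the same kind of check is below; the kubeconfig path is a placeholder and this is an illustration of the idea, not minikube's implementation.)

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; minikube keeps its kubeconfig under the test's integration directory.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of "waiting for kube-system pods to appear".
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
        // Equivalent of the NodePressure / capacity check.
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }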
	I0408 19:34:07.860496  208578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 19:34:08.167428  208578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 19:34:08.179787  208578 ops.go:34] apiserver oom_adj: -16
	I0408 19:34:08.179815  208578 kubeadm.go:597] duration metric: took 8.114247325s to restartPrimaryControlPlane
	I0408 19:34:08.179826  208578 kubeadm.go:394] duration metric: took 8.162197731s to StartCluster
	I0408 19:34:08.179854  208578 settings.go:142] acquiring lock: {Name:mk8d530f6b8ad949177759460b330a3d74710125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.180042  208578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:34:08.181338  208578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/kubeconfig: {Name:mk9a380edcf1115627e95ec52acade4ebe48201c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 19:34:08.181671  208578 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.150 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 19:34:08.181921  208578 config.go:182] Loaded profile config "newest-cni-574058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:34:08.181826  208578 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 19:34:08.182006  208578 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574058"
	I0408 19:34:08.182028  208578 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574058"
	W0408 19:34:08.182038  208578 addons.go:247] addon storage-provisioner should already be in state true
	I0408 19:34:08.182051  208578 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574058"
	I0408 19:34:08.182074  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182083  208578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574058"
	I0408 19:34:08.182088  208578 addons.go:69] Setting dashboard=true in profile "newest-cni-574058"
	I0408 19:34:08.182105  208578 addons.go:238] Setting addon dashboard=true in "newest-cni-574058"
	W0408 19:34:08.182113  208578 addons.go:247] addon dashboard should already be in state true
	I0408 19:34:08.182131  208578 addons.go:69] Setting metrics-server=true in profile "newest-cni-574058"
	I0408 19:34:08.182169  208578 addons.go:238] Setting addon metrics-server=true in "newest-cni-574058"
	W0408 19:34:08.182186  208578 addons.go:247] addon metrics-server should already be in state true
	I0408 19:34:08.182145  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182385  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.182625  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182635  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182780  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182808  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182809  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182856  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.182860  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.182894  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.183614  208578 out.go:177] * Verifying Kubernetes components...
	I0408 19:34:08.185136  208578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 19:34:08.205252  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0408 19:34:08.205269  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
	I0408 19:34:08.205250  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0408 19:34:08.205782  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205862  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.205880  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.206304  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206326  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206463  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206480  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206521  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.206542  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.206772  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206876  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.206951  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.207122  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.207475  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207522  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207530  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0408 19:34:08.207780  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.207836  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.207946  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.208393  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.208416  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.208853  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.209417  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.209467  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.210319  208578 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574058"
	W0408 19:34:08.210343  208578 addons.go:247] addon default-storageclass should already be in state true
	I0408 19:34:08.210375  208578 host.go:66] Checking if "newest-cni-574058" exists ...
	I0408 19:34:08.210704  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.210751  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.225440  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0408 19:34:08.225710  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0408 19:34:08.228520  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0408 19:34:08.230369  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230448  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.230873  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.230895  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231050  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231066  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231117  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.231304  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231477  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.231501  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.231615  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.231634  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.231727  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.232131  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.232342  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.233628  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234100  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.234470  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.236144  208578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 19:34:08.236161  208578 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 19:34:08.236145  208578 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0408 19:34:08.237353  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 19:34:08.237375  208578 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 19:34:08.237433  208578 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.237448  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 19:34:08.237400  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.237474  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.238698  208578 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0408 19:34:08.240058  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0408 19:34:08.240080  208578 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0408 19:34:08.240105  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.241332  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241339  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241518  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241547  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241759  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.241898  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.241919  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.241954  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.242181  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242231  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.242347  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242391  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.242512  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.242521  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.243247  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243599  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.243625  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.243791  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.243950  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.244122  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.244231  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.254405  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0408 19:34:08.254920  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.255483  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.255513  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.255922  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.256515  208578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:34:08.256572  208578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:34:08.273680  208578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0408 19:34:08.274259  208578 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:34:08.274762  208578 main.go:141] libmachine: Using API Version  1
	I0408 19:34:08.274785  208578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:34:08.275206  208578 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:34:08.275446  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetState
	I0408 19:34:08.277473  208578 main.go:141] libmachine: (newest-cni-574058) Calling .DriverName
	I0408 19:34:08.277707  208578 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.277720  208578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 19:34:08.277738  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHHostname
	I0408 19:34:08.281550  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282023  208578 main.go:141] libmachine: (newest-cni-574058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:1d:f3", ip: ""} in network mk-newest-cni-574058: {Iface:virbr3 ExpiryTime:2025-04-08 20:33:45 +0000 UTC Type:0 Mac:52:54:00:60:1d:f3 Iaid: IPaddr:192.168.61.150 Prefix:24 Hostname:newest-cni-574058 Clientid:01:52:54:00:60:1d:f3}
	I0408 19:34:08.282070  208578 main.go:141] libmachine: (newest-cni-574058) DBG | domain newest-cni-574058 has defined IP address 192.168.61.150 and MAC address 52:54:00:60:1d:f3 in network mk-newest-cni-574058
	I0408 19:34:08.282405  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHPort
	I0408 19:34:08.282639  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHKeyPath
	I0408 19:34:08.282811  208578 main.go:141] libmachine: (newest-cni-574058) Calling .GetSSHUsername
	I0408 19:34:08.282957  208578 sshutil.go:53] new ssh client: &{IP:192.168.61.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/newest-cni-574058/id_rsa Username:docker}
	I0408 19:34:08.427224  208578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 19:34:08.443994  208578 api_server.go:52] waiting for apiserver process to appear ...
	I0408 19:34:08.444087  208578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 19:34:08.464573  208578 api_server.go:72] duration metric: took 282.851736ms to wait for apiserver process to appear ...
	I0408 19:34:08.464606  208578 api_server.go:88] waiting for apiserver healthz status ...
	I0408 19:34:08.464631  208578 api_server.go:253] Checking apiserver healthz at https://192.168.61.150:8443/healthz ...
	I0408 19:34:08.471670  208578 api_server.go:279] https://192.168.61.150:8443/healthz returned 200:
	ok
	I0408 19:34:08.473124  208578 api_server.go:141] control plane version: v1.32.2
	I0408 19:34:08.473152  208578 api_server.go:131] duration metric: took 8.53801ms to wait for apiserver health ...
	I0408 19:34:08.473161  208578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 19:34:08.480501  208578 system_pods.go:59] 8 kube-system pods found
	I0408 19:34:08.480533  208578 system_pods.go:61] "coredns-668d6bf9bc-7m76j" [524b8395-bc0c-4352-924b-0c167d811679] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 19:34:08.480541  208578 system_pods.go:61] "etcd-newest-cni-574058" [d8e462e3-9275-4142-afd6-985cae85ac27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 19:34:08.480551  208578 system_pods.go:61] "kube-apiserver-newest-cni-574058" [4a5eb689-2586-426b-b57f-d454a77b92b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 19:34:08.480559  208578 system_pods.go:61] "kube-controller-manager-newest-cni-574058" [85b42f9e-9ee0-44a0-88e5-b980325c56a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 19:34:08.480565  208578 system_pods.go:61] "kube-proxy-b8nhw" [bd184c46-712e-4de3-b2f0-90fc6ec055eb] Running
	I0408 19:34:08.480573  208578 system_pods.go:61] "kube-scheduler-newest-cni-574058" [9c61f50a-1afb-4404-970a-7c7329499058] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 19:34:08.480583  208578 system_pods.go:61] "metrics-server-f79f97bbb-krkdh" [8436d350-8ad0-4106-ba05-656a70cd1bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 19:34:08.480589  208578 system_pods.go:61] "storage-provisioner" [6e4061cb-7ed5-4be3-8a67-d3d60476573a] Running
	I0408 19:34:08.480619  208578 system_pods.go:74] duration metric: took 7.451617ms to wait for pod list to return data ...
	I0408 19:34:08.480627  208578 default_sa.go:34] waiting for default service account to be created ...
	I0408 19:34:08.484250  208578 default_sa.go:45] found service account: "default"
	I0408 19:34:08.484277  208578 default_sa.go:55] duration metric: took 3.643294ms for default service account to be created ...
	I0408 19:34:08.484293  208578 kubeadm.go:582] duration metric: took 302.580864ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 19:34:08.484317  208578 node_conditions.go:102] verifying NodePressure condition ...
	I0408 19:34:08.487398  208578 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 19:34:08.487426  208578 node_conditions.go:123] node cpu capacity is 2
	I0408 19:34:08.487441  208578 node_conditions.go:105] duration metric: took 3.118357ms to run NodePressure ...
	I0408 19:34:08.487461  208578 start.go:241] waiting for startup goroutines ...
	I0408 19:34:08.536933  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 19:34:08.536957  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 19:34:08.539452  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0408 19:34:08.539479  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0408 19:34:08.557315  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 19:34:08.578553  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 19:34:08.578580  208578 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 19:34:08.583900  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 19:34:08.606686  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0408 19:34:08.606717  208578 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0408 19:34:08.645882  208578 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.645916  208578 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 19:34:08.656641  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0408 19:34:08.656676  208578 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0408 19:34:08.699927  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 19:34:08.706202  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0408 19:34:08.706227  208578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0408 19:34:08.775120  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0408 19:34:08.775154  208578 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0408 19:34:08.889009  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0408 19:34:08.889058  208578 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0408 19:34:08.981237  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0408 19:34:08.981269  208578 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0408 19:34:09.040922  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0408 19:34:09.040954  208578 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0408 19:34:09.064862  208578 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:09.064889  208578 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0408 19:34:09.141240  208578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0408 19:34:10.275126  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.717762904s)
	I0408 19:34:10.275206  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275219  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275147  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.691207244s)
	I0408 19:34:10.275285  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275304  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275579  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275630  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275636  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.275644  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275649  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275653  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275675  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275663  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.275714  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.275933  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.275990  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.276026  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.276110  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.276124  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282287  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.282320  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.282699  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.282727  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.282737  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.346432  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.646437158s)
	I0408 19:34:10.346500  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346513  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346895  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.346916  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.346927  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.346936  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.346954  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347193  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.347211  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.347217  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.347242  208578 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574058"
	I0408 19:34:10.900219  208578 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.758920795s)
	I0408 19:34:10.900351  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900404  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.900746  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.900793  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.900816  208578 main.go:141] libmachine: Making call to close driver server
	I0408 19:34:10.900830  208578 main.go:141] libmachine: (newest-cni-574058) Calling .Close
	I0408 19:34:10.901113  208578 main.go:141] libmachine: Successfully made call to close driver server
	I0408 19:34:10.901156  208578 main.go:141] libmachine: (newest-cni-574058) DBG | Closing plugin on server side
	I0408 19:34:10.901166  208578 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 19:34:10.903191  208578 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574058 addons enable metrics-server
	
	I0408 19:34:10.905113  208578 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0408 19:34:10.906865  208578 addons.go:514] duration metric: took 2.725052548s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0408 19:34:10.906918  208578 start.go:246] waiting for cluster config update ...
	I0408 19:34:10.906936  208578 start.go:255] writing updated cluster config ...
	I0408 19:34:10.907298  208578 ssh_runner.go:195] Run: rm -f paused
	I0408 19:34:10.967232  208578 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 19:34:10.969649  208578 out.go:177] * Done! kubectl is now configured to use "newest-cni-574058" cluster and "default" namespace by default
	I0408 19:34:11.443529  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:34:11.443989  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:11.444237  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:16.444610  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:16.444853  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:26.445048  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:26.445308  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:34:46.445770  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:34:46.446104  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447251  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:35:26.447505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:35:26.447529  205913 kubeadm.go:310] 
	I0408 19:35:26.447585  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:35:26.447662  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:35:26.447677  205913 kubeadm.go:310] 
	I0408 19:35:26.447726  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:35:26.447781  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:35:26.447887  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:35:26.447894  205913 kubeadm.go:310] 
	I0408 19:35:26.448020  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:35:26.448076  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:35:26.448126  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:35:26.448136  205913 kubeadm.go:310] 
	I0408 19:35:26.448267  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:35:26.448411  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:35:26.448474  205913 kubeadm.go:310] 
	I0408 19:35:26.448621  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:35:26.448774  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:35:26.448915  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:35:26.449049  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:35:26.449115  205913 kubeadm.go:310] 
	I0408 19:35:26.449270  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:35:26.449395  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:35:26.449512  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0408 19:35:26.449660  205913 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 19:35:26.449711  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 19:35:26.891169  205913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 19:35:26.904909  205913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 19:35:26.914475  205913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 19:35:26.914502  205913 kubeadm.go:157] found existing configuration files:
	
	I0408 19:35:26.914553  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 19:35:26.924306  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 19:35:26.924374  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 19:35:26.934487  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 19:35:26.944461  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 19:35:26.944529  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 19:35:26.954995  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.964855  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 19:35:26.964941  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 19:35:26.975439  205913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 19:35:26.985173  205913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 19:35:26.985239  205913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 19:35:26.995433  205913 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 19:35:27.204002  205913 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 19:37:22.974768  205913 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 19:37:22.974883  205913 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0408 19:37:22.976335  205913 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0408 19:37:22.976383  205913 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 19:37:22.976466  205913 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 19:37:22.976595  205913 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 19:37:22.976752  205913 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 19:37:22.976829  205913 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 19:37:22.979175  205913 out.go:235]   - Generating certificates and keys ...
	I0408 19:37:22.979274  205913 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 19:37:22.979335  205913 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 19:37:22.979409  205913 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 19:37:22.979461  205913 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0408 19:37:22.979537  205913 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 19:37:22.979599  205913 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0408 19:37:22.979653  205913 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0408 19:37:22.979723  205913 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0408 19:37:22.979801  205913 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 19:37:22.979874  205913 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 19:37:22.979909  205913 kubeadm.go:310] [certs] Using the existing "sa" key
	I0408 19:37:22.979973  205913 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 19:37:22.980044  205913 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 19:37:22.980118  205913 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 19:37:22.980189  205913 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 19:37:22.980236  205913 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 19:37:22.980358  205913 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 19:37:22.980475  205913 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 19:37:22.980538  205913 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 19:37:22.980630  205913 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 19:37:22.982169  205913 out.go:235]   - Booting up control plane ...
	I0408 19:37:22.982280  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 19:37:22.982367  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 19:37:22.982450  205913 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 19:37:22.982565  205913 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 19:37:22.982720  205913 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 19:37:22.982764  205913 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0408 19:37:22.982823  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.982981  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983043  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983218  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983314  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983505  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983589  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.983784  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.983874  205913 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 19:37:22.984082  205913 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 19:37:22.984105  205913 kubeadm.go:310] 
	I0408 19:37:22.984143  205913 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0408 19:37:22.984179  205913 kubeadm.go:310] 		timed out waiting for the condition
	I0408 19:37:22.984185  205913 kubeadm.go:310] 
	I0408 19:37:22.984216  205913 kubeadm.go:310] 	This error is likely caused by:
	I0408 19:37:22.984247  205913 kubeadm.go:310] 		- The kubelet is not running
	I0408 19:37:22.984339  205913 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 19:37:22.984346  205913 kubeadm.go:310] 
	I0408 19:37:22.984449  205913 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 19:37:22.984495  205913 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0408 19:37:22.984524  205913 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0408 19:37:22.984531  205913 kubeadm.go:310] 
	I0408 19:37:22.984627  205913 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 19:37:22.984699  205913 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 19:37:22.984706  205913 kubeadm.go:310] 
	I0408 19:37:22.984805  205913 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 19:37:22.984952  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 19:37:22.985064  205913 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0408 19:37:22.985134  205913 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 19:37:22.985199  205913 kubeadm.go:310] 
	I0408 19:37:22.985210  205913 kubeadm.go:394] duration metric: took 7m56.100848189s to StartCluster
	I0408 19:37:22.985262  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 19:37:22.985318  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 19:37:23.020922  205913 cri.go:89] found id: ""
	I0408 19:37:23.020963  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.020980  205913 logs.go:284] No container was found matching "kube-apiserver"
	I0408 19:37:23.020989  205913 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 19:37:23.021057  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 19:37:23.053119  205913 cri.go:89] found id: ""
	I0408 19:37:23.053155  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.053168  205913 logs.go:284] No container was found matching "etcd"
	I0408 19:37:23.053179  205913 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 19:37:23.053251  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 19:37:23.085925  205913 cri.go:89] found id: ""
	I0408 19:37:23.085959  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.085968  205913 logs.go:284] No container was found matching "coredns"
	I0408 19:37:23.085976  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 19:37:23.086026  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 19:37:23.119428  205913 cri.go:89] found id: ""
	I0408 19:37:23.119460  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.119472  205913 logs.go:284] No container was found matching "kube-scheduler"
	I0408 19:37:23.119482  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 19:37:23.119555  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 19:37:23.152519  205913 cri.go:89] found id: ""
	I0408 19:37:23.152548  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.152556  205913 logs.go:284] No container was found matching "kube-proxy"
	I0408 19:37:23.152563  205913 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 19:37:23.152616  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 19:37:23.185610  205913 cri.go:89] found id: ""
	I0408 19:37:23.185653  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.185660  205913 logs.go:284] No container was found matching "kube-controller-manager"
	I0408 19:37:23.185667  205913 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 19:37:23.185722  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 19:37:23.220368  205913 cri.go:89] found id: ""
	I0408 19:37:23.220396  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.220404  205913 logs.go:284] No container was found matching "kindnet"
	I0408 19:37:23.220411  205913 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 19:37:23.220465  205913 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 19:37:23.253979  205913 cri.go:89] found id: ""
	I0408 19:37:23.254016  205913 logs.go:282] 0 containers: []
	W0408 19:37:23.254029  205913 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0408 19:37:23.254044  205913 logs.go:123] Gathering logs for kubelet ...
	I0408 19:37:23.254061  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 19:37:23.304529  205913 logs.go:123] Gathering logs for dmesg ...
	I0408 19:37:23.304574  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 19:37:23.318406  205913 logs.go:123] Gathering logs for describe nodes ...
	I0408 19:37:23.318443  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 19:37:23.393733  205913 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 19:37:23.393774  205913 logs.go:123] Gathering logs for CRI-O ...
	I0408 19:37:23.393795  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 19:37:23.495288  205913 logs.go:123] Gathering logs for container status ...
	I0408 19:37:23.495333  205913 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0408 19:37:23.534511  205913 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 19:37:23.534568  205913 out.go:270] * 
	W0408 19:37:23.534629  205913 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.534643  205913 out.go:270] * 
	W0408 19:37:23.535480  205913 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 19:37:23.539860  205913 out.go:201] 
	W0408 19:37:23.541197  205913 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 19:37:23.541240  205913 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 19:37:23.541256  205913 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 19:37:23.542872  205913 out.go:201] 
	
	
	==> CRI-O <==
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.318324095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141975318299464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39f50679-afe5-46ed-b9b7-831d95bca15e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.318970065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29e7ffaa-bdb2-4657-a60d-9b961eff5673 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.319022115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29e7ffaa-bdb2-4657-a60d-9b961eff5673 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.319056754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29e7ffaa-bdb2-4657-a60d-9b961eff5673 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.349354404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d84319e-b716-461f-9240-61abff141e98 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.349428963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d84319e-b716-461f-9240-61abff141e98 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.350462603Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62f2293a-7f01-46cb-a3d6-b15ec1d30d3d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.350868774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141975350841014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62f2293a-7f01-46cb-a3d6-b15ec1d30d3d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.351401934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=651bab83-17f6-44aa-aa29-5307c080b862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.351456303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=651bab83-17f6-44aa-aa29-5307c080b862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.351491831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=651bab83-17f6-44aa-aa29-5307c080b862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.382200167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d4f9c4b-ba9b-4fa9-a5a9-8c284cc1ec70 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.382270769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d4f9c4b-ba9b-4fa9-a5a9-8c284cc1ec70 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.383421671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0505fe97-b0ef-4bd6-8475-e7619ee196b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.383836053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141975383795373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0505fe97-b0ef-4bd6-8475-e7619ee196b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.384569581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0edfef43-1c37-4275-b108-e6b0c4d54639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.384645633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0edfef43-1c37-4275-b108-e6b0c4d54639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.384686515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0edfef43-1c37-4275-b108-e6b0c4d54639 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.416874002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23fb2c47-1734-4a8d-b280-cfd81adbcea7 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.417014762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23fb2c47-1734-4a8d-b280-cfd81adbcea7 name=/runtime.v1.RuntimeService/Version
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.418382670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc1ecfe6-5a84-4471-9491-58830a58deb2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.418818705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744141975418789930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc1ecfe6-5a84-4471-9491-58830a58deb2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.419607636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09de1337-ae37-4482-be95-26296b2027f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.419688211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09de1337-ae37-4482-be95-26296b2027f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 19:52:55 old-k8s-version-257500 crio[629]: time="2025-04-08 19:52:55.419729159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=09de1337-ae37-4482-be95-26296b2027f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 19:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049597] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039830] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.124668] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.083532] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.166101] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.061460] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064691] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.194145] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.127525] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.273689] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.301504] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.058099] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817423] systemd-fstab-generator[1002]: Ignoring "noauto" option for root device
	[ +11.268261] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 19:33] systemd-fstab-generator[4958]: Ignoring "noauto" option for root device
	[Apr 8 19:35] systemd-fstab-generator[5233]: Ignoring "noauto" option for root device
	[  +0.061975] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:52:55 up 23 min,  0 users,  load average: 0.01, 0.03, 0.00
	Linux old-k8s-version-257500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b93dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000dfeae0, 0x24, 0x1000000000060, 0x7fef39aaab68, 0x118, ...)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: net/http.(*Transport).dial(0xc0004c5540, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000dfeae0, 0x24, 0x0, 0x0, 0x4f0b860, ...)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 08 19:52:54 old-k8s-version-257500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: net/http.(*Transport).dialConn(0xc0004c5540, 0x4f7fe00, 0xc000120018, 0x0, 0xc000354600, 0x5, 0xc000dfeae0, 0x24, 0x0, 0xc000173200, ...)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: net/http.(*Transport).dialConnFor(0xc0004c5540, 0xc00024e160)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: created by net/http.(*Transport).queueForDial
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: goroutine 159 [select]:
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000e0ed20, 0xc0000bbc00, 0xc0000e0de0, 0xc0000e0d80)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: created by net.(*netFD).connect
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: goroutine 158 [select]:
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000e0e840, 0xc0000bb800, 0xc0000e0d20, 0xc0000e0c60)
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]: created by net.(*netFD).connect
	Apr 08 19:52:54 old-k8s-version-257500 kubelet[7126]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 08 19:52:55 old-k8s-version-257500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 182.
	Apr 08 19:52:55 old-k8s-version-257500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 19:52:55 old-k8s-version-257500 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 2 (240.505474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-257500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (389.21s)
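The tail of the minikube log above ends with a concrete remediation hint ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start"). Purely as an illustrative sketch, not a command taken from this run's actual invocation, a retry of the affected profile with that hint applied could look like the snippet below; the --driver and --container-runtime flags are assumptions based on this job's KVM/cri-o environment, while the profile name and Kubernetes version come from the log itself:

	# Hypothetical retry sketch; --driver/--container-runtime are assumed from this
	# job's KVM + cri-o environment, profile name and k8s version are from the log.
	minikube start -p old-k8s-version-257500 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails to come up, the log's own troubleshooting pointers ('systemctl status kubelet', 'journalctl -xeu kubelet', and crictl against /var/run/crio/crio.sock) remain the next place to look.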

                                                
                                    

Test pass (284/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.18
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 5.7
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.08
18 TestDownloadOnly/v1.32.2/DeleteAll 0.16
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 89.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 136.51
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.54
35 TestAddons/parallel/Registry 17.94
37 TestAddons/parallel/InspektorGadget 11.66
38 TestAddons/parallel/MetricsServer 6.87
40 TestAddons/parallel/CSI 47.86
41 TestAddons/parallel/Headlamp 21.31
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 58.93
44 TestAddons/parallel/NvidiaDevicePlugin 6.68
45 TestAddons/parallel/Yakd 10.98
47 TestAddons/StoppedEnableDisable 91.34
48 TestCertOptions 61.85
49 TestCertExpiration 287.9
51 TestForceSystemdFlag 105.09
52 TestForceSystemdEnv 65.23
54 TestKVMDriverInstallOrUpdate 4.04
58 TestErrorSpam/setup 41.36
59 TestErrorSpam/start 0.39
60 TestErrorSpam/status 0.81
61 TestErrorSpam/pause 1.71
62 TestErrorSpam/unpause 1.75
63 TestErrorSpam/stop 5.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.42
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.07
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 2.07
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 31.86
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.41
86 TestFunctional/serial/LogsFileCmd 1.49
87 TestFunctional/serial/InvalidService 4.45
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 13.8
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.02
97 TestFunctional/parallel/ServiceCmdConnect 10.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 40.58
101 TestFunctional/parallel/SSHCmd 0.49
102 TestFunctional/parallel/CpCmd 1.47
103 TestFunctional/parallel/MySQL 25.93
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.5
109 TestFunctional/parallel/NodeLabels 0.32
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
113 TestFunctional/parallel/License 0.32
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
117 TestFunctional/parallel/Version/short 0.06
118 TestFunctional/parallel/Version/components 0.67
119 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.28
125 TestFunctional/parallel/ServiceCmd/List 0.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 1.15
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
130 TestFunctional/parallel/ImageCommands/ImageBuild 4.49
131 TestFunctional/parallel/ImageCommands/Setup 1.65
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
134 TestFunctional/parallel/ServiceCmd/Format 0.35
135 TestFunctional/parallel/ServiceCmd/URL 0.44
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.59
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
144 TestFunctional/parallel/ProfileCmd/profile_list 0.46
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
146 TestFunctional/parallel/MountCmd/any-port 21.78
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.21
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
150 TestFunctional/parallel/ImageCommands/ImageRemove 3.65
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
153 TestFunctional/parallel/MountCmd/specific-port 1.96
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 207.08
163 TestMultiControlPlane/serial/DeployApp 7.68
164 TestMultiControlPlane/serial/PingHostFromPods 1.25
165 TestMultiControlPlane/serial/AddWorkerNode 56.61
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 13.75
169 TestMultiControlPlane/serial/StopSecondaryNode 91.72
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 45.49
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 469
174 TestMultiControlPlane/serial/DeleteSecondaryNode 18.91
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 273.2
177 TestMultiControlPlane/serial/RestartCluster 126.12
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
179 TestMultiControlPlane/serial/AddSecondaryNode 76.92
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
184 TestJSONOutput/start/Command 84.38
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.64
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.36
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 88.74
216 TestMountStart/serial/StartWithMountFirst 24.6
217 TestMountStart/serial/VerifyMountFirst 0.4
218 TestMountStart/serial/StartWithMountSecond 28.34
219 TestMountStart/serial/VerifyMountSecond 0.4
220 TestMountStart/serial/DeleteFirst 0.95
221 TestMountStart/serial/VerifyMountPostDelete 0.41
222 TestMountStart/serial/Stop 1.29
223 TestMountStart/serial/RestartStopped 23.14
224 TestMountStart/serial/VerifyMountPostStop 0.4
227 TestMultiNode/serial/FreshStart2Nodes 113.59
228 TestMultiNode/serial/DeployApp2Nodes 6.58
229 TestMultiNode/serial/PingHostFrom2Pods 0.82
230 TestMultiNode/serial/AddNode 47.96
231 TestMultiNode/serial/MultiNodeLabels 0.07
232 TestMultiNode/serial/ProfileList 0.64
233 TestMultiNode/serial/CopyFile 7.68
234 TestMultiNode/serial/StopNode 2.32
235 TestMultiNode/serial/StartAfterStop 40.39
236 TestMultiNode/serial/RestartKeepsNodes 344.93
237 TestMultiNode/serial/DeleteNode 2.69
238 TestMultiNode/serial/StopMultiNode 182
239 TestMultiNode/serial/RestartMultiNode 117.72
240 TestMultiNode/serial/ValidateNameConflict 47.26
247 TestScheduledStopUnix 120.98
251 TestRunningBinaryUpgrade 146.85
262 TestNetworkPlugins/group/false 3.61
266 TestStoppedBinaryUpgrade/Setup 0.75
267 TestStoppedBinaryUpgrade/Upgrade 186.05
276 TestPause/serial/Start 79.44
277 TestPause/serial/SecondStartNoReconfiguration 40.56
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
281 TestNoKubernetes/serial/StartWithK8s 45.71
282 TestPause/serial/Pause 0.72
283 TestPause/serial/VerifyStatus 0.33
284 TestPause/serial/Unpause 0.7
285 TestPause/serial/PauseAgain 0.87
286 TestPause/serial/DeletePaused 1.12
287 TestPause/serial/VerifyDeletedResources 4.66
288 TestNoKubernetes/serial/StartWithStopK8s 48.15
289 TestNoKubernetes/serial/Start 29.88
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
291 TestNoKubernetes/serial/ProfileList 1.1
292 TestNoKubernetes/serial/Stop 1.3
293 TestNoKubernetes/serial/StartNoArgs 62.89
294 TestNetworkPlugins/group/auto/Start 106.94
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
296 TestNetworkPlugins/group/kindnet/Start 98.71
297 TestNetworkPlugins/group/calico/Start 88.15
298 TestNetworkPlugins/group/auto/KubeletFlags 0.23
299 TestNetworkPlugins/group/auto/NetCatPod 15.28
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/auto/DNS 0.15
302 TestNetworkPlugins/group/auto/Localhost 0.14
303 TestNetworkPlugins/group/auto/HairPin 0.13
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
305 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
306 TestNetworkPlugins/group/kindnet/DNS 0.19
307 TestNetworkPlugins/group/kindnet/Localhost 0.13
308 TestNetworkPlugins/group/kindnet/HairPin 0.14
309 TestNetworkPlugins/group/custom-flannel/Start 72.92
310 TestNetworkPlugins/group/enable-default-cni/Start 76.77
311 TestNetworkPlugins/group/flannel/Start 112.33
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/calico/KubeletFlags 0.23
314 TestNetworkPlugins/group/calico/NetCatPod 16.27
315 TestNetworkPlugins/group/calico/DNS 0.15
316 TestNetworkPlugins/group/calico/Localhost 0.15
317 TestNetworkPlugins/group/calico/HairPin 0.14
318 TestNetworkPlugins/group/bridge/Start 99.9
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
321 TestNetworkPlugins/group/custom-flannel/DNS 0.2
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
332 TestStartStop/group/no-preload/serial/FirstStart 84.23
333 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
335 TestNetworkPlugins/group/flannel/NetCatPod 15.29
336 TestNetworkPlugins/group/flannel/DNS 0.15
337 TestNetworkPlugins/group/flannel/Localhost 0.13
338 TestNetworkPlugins/group/flannel/HairPin 0.12
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
340 TestNetworkPlugins/group/bridge/NetCatPod 10.29
342 TestStartStop/group/embed-certs/serial/FirstStart 96.12
343 TestNetworkPlugins/group/bridge/DNS 0.17
344 TestNetworkPlugins/group/bridge/Localhost 0.19
345 TestNetworkPlugins/group/bridge/HairPin 0.15
347 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.78
348 TestStartStop/group/no-preload/serial/DeployApp 13.52
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
350 TestStartStop/group/no-preload/serial/Stop 91.07
351 TestStartStop/group/embed-certs/serial/DeployApp 12.3
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
353 TestStartStop/group/embed-certs/serial/Stop 90.84
354 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.08
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 353.05
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
360 TestStartStop/group/embed-certs/serial/SecondStart 337.43
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 311.65
365 TestStartStop/group/old-k8s-version/serial/Stop 1.39
366 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
370 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
371 TestStartStop/group/no-preload/serial/Pause 2.76
373 TestStartStop/group/newest-cni/serial/FirstStart 53.81
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.75
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
381 TestStartStop/group/embed-certs/serial/Pause 2.94
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
384 TestStartStop/group/newest-cni/serial/Stop 7.35
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/newest-cni/serial/SecondStart 37.16
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
390 TestStartStop/group/newest-cni/serial/Pause 2.8
x
+
TestDownloadOnly/v1.20.0/json-events (10.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-264168 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-264168 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.179283303s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.18s)
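For reference, a minimal standalone sketch of the same download-only flow, assuming the locally built out/minikube-linux-amd64 binary used throughout this report (the profile name below is only an example, not the one from the test):

	# Pre-fetch the ISO, the v1.20.0 preload tarball and kubectl without creating a VM,
	# then remove the throwaway profile again.
	out/minikube-linux-amd64 start --download-only -p download-only-demo --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
	out/minikube-linux-amd64 delete -p download-only-demo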

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0408 18:12:55.246064  148487 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0408 18:12:55.246203  148487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-264168
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-264168: exit status 85 (75.25721ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-264168 | jenkins | v1.35.0 | 08 Apr 25 18:12 UTC |          |
	|         | -p download-only-264168        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 18:12:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:12:45.114320  148499 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:12:45.114467  148499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:12:45.114480  148499 out.go:358] Setting ErrFile to fd 2...
	I0408 18:12:45.114488  148499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:12:45.114705  148499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	W0408 18:12:45.114863  148499 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20604-141129/.minikube/config/config.json: open /home/jenkins/minikube-integration/20604-141129/.minikube/config/config.json: no such file or directory
	I0408 18:12:45.115541  148499 out.go:352] Setting JSON to true
	I0408 18:12:45.117276  148499 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6910,"bootTime":1744129055,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:12:45.117429  148499 start.go:139] virtualization: kvm guest
	I0408 18:12:45.120121  148499 out.go:97] [download-only-264168] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0408 18:12:45.120410  148499 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 18:12:45.120471  148499 notify.go:220] Checking for updates...
	I0408 18:12:45.122339  148499 out.go:169] MINIKUBE_LOCATION=20604
	I0408 18:12:45.124383  148499 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:12:45.126567  148499 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:12:45.128404  148499 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:12:45.130175  148499 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 18:12:45.133561  148499 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:12:45.134026  148499 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 18:12:45.261710  148499 out.go:97] Using the kvm2 driver based on user configuration
	I0408 18:12:45.261817  148499 start.go:297] selected driver: kvm2
	I0408 18:12:45.261881  148499 start.go:901] validating driver "kvm2" against <nil>
	I0408 18:12:45.262565  148499 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:12:45.263752  148499 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 18:12:45.285187  148499 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 18:12:45.285264  148499 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:12:45.286012  148499 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 18:12:45.286207  148499 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:12:45.286245  148499 cni.go:84] Creating CNI manager for ""
	I0408 18:12:45.286290  148499 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 18:12:45.286299  148499 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 18:12:45.286351  148499 start.go:340] cluster config:
	{Name:download-only-264168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-264168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:12:45.286559  148499 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:12:45.289164  148499 out.go:97] Downloading VM boot image ...
	I0408 18:12:45.289312  148499 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0408 18:12:49.848922  148499 out.go:97] Starting "download-only-264168" primary control-plane node in "download-only-264168" cluster
	I0408 18:12:49.848956  148499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 18:12:49.871160  148499 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 18:12:49.871204  148499 cache.go:56] Caching tarball of preloaded images
	I0408 18:12:49.871360  148499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 18:12:49.873439  148499 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 18:12:49.873469  148499 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:12:49.903686  148499 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 18:12:53.625028  148499 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:12:53.625125  148499 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:12:54.552798  148499 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 18:12:54.553134  148499 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/download-only-264168/config.json ...
	I0408 18:12:54.553166  148499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/download-only-264168/config.json: {Name:mk38c9e62850c0371ada1fa286f8ea26d78b2a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:12:54.553333  148499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 18:12:54.553522  148499 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-264168 host does not exist
	  To start a cluster, run: "minikube start -p download-only-264168"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-264168
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (5.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-115248 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-115248 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.700495484s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0408 18:13:01.322955  148487 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0408 18:13:01.323005  148487 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-115248
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-115248: exit status 85 (75.558099ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-264168 | jenkins | v1.35.0 | 08 Apr 25 18:12 UTC |                     |
	|         | -p download-only-264168        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 08 Apr 25 18:12 UTC | 08 Apr 25 18:12 UTC |
	| delete  | -p download-only-264168        | download-only-264168 | jenkins | v1.35.0 | 08 Apr 25 18:12 UTC | 08 Apr 25 18:12 UTC |
	| start   | -o=json --download-only        | download-only-115248 | jenkins | v1.35.0 | 08 Apr 25 18:12 UTC |                     |
	|         | -p download-only-115248        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 18:12:55
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 18:12:55.668848  148705 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:12:55.669132  148705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:12:55.669142  148705 out.go:358] Setting ErrFile to fd 2...
	I0408 18:12:55.669147  148705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:12:55.669366  148705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:12:55.670253  148705 out.go:352] Setting JSON to true
	I0408 18:12:55.671342  148705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6921,"bootTime":1744129055,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:12:55.671424  148705 start.go:139] virtualization: kvm guest
	I0408 18:12:55.673990  148705 out.go:97] [download-only-115248] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:12:55.674206  148705 notify.go:220] Checking for updates...
	I0408 18:12:55.675961  148705 out.go:169] MINIKUBE_LOCATION=20604
	I0408 18:12:55.677685  148705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:12:55.679501  148705 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:12:55.681795  148705 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:12:55.683613  148705 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 18:12:55.686855  148705 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 18:12:55.687125  148705 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 18:12:55.722471  148705 out.go:97] Using the kvm2 driver based on user configuration
	I0408 18:12:55.722515  148705 start.go:297] selected driver: kvm2
	I0408 18:12:55.722526  148705 start.go:901] validating driver "kvm2" against <nil>
	I0408 18:12:55.722910  148705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:12:55.723029  148705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-141129/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 18:12:55.740001  148705 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0408 18:12:55.740073  148705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 18:12:55.740851  148705 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 18:12:55.741058  148705 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 18:12:55.741102  148705 cni.go:84] Creating CNI manager for ""
	I0408 18:12:55.741149  148705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 18:12:55.741164  148705 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 18:12:55.741240  148705 start.go:340] cluster config:
	{Name:download-only-115248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-115248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:12:55.741364  148705 iso.go:125] acquiring lock: {Name:mk6f89956dcd0ccd06b3c273592988c0e077c69a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 18:12:55.743561  148705 out.go:97] Starting "download-only-115248" primary control-plane node in "download-only-115248" cluster
	I0408 18:12:55.743587  148705 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 18:12:55.819242  148705 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 18:12:55.819284  148705 cache.go:56] Caching tarball of preloaded images
	I0408 18:12:55.819454  148705 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 18:12:55.821551  148705 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0408 18:12:55.821587  148705 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:12:55.845430  148705 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0408 18:12:59.807987  148705 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:12:59.808084  148705 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20604-141129/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0408 18:13:00.575016  148705 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0408 18:13:00.575351  148705 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/download-only-115248/config.json ...
	I0408 18:13:00.575381  148705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/download-only-115248/config.json: {Name:mk161154f0307b79fe2e8f35f9f1ce21eafeab5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 18:13:00.576317  148705 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0408 18:13:00.576506  148705 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20604-141129/.minikube/cache/linux/amd64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-115248 host does not exist
	  To start a cluster, run: "minikube start -p download-only-115248"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-115248
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0408 18:13:01.999295  148487 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-788533 --alsologtostderr --binary-mirror http://127.0.0.1:33699 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-788533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-788533
--- PASS: TestBinaryMirror (0.65s)
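A minimal sketch of the same idea outside the test harness, assuming a mirror serving the Kubernetes release binaries is already listening on the given address (the port is the one from the log; the profile name below is illustrative):

	# Fetch the Kubernetes binaries from a private mirror instead of dl.k8s.io,
	# e.g. for air-gapped setups, then remove the throwaway profile.
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:33699 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-demo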

                                                
                                    
x
+
TestOffline (89.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-913064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-913064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.275183171s)
helpers_test.go:175: Cleaning up "offline-crio-913064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-913064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-913064: (1.147912305s)
--- PASS: TestOffline (89.42s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-835623
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-835623: exit status 85 (60.067135ms)

                                                
                                                
-- stdout --
	* Profile "addons-835623" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-835623"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-835623
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-835623: exit status 85 (59.494874ms)

                                                
                                                
-- stdout --
	* Profile "addons-835623" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-835623"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (136.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-835623 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-835623 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.514135879s)
--- PASS: TestAddons/Setup (136.51s)
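For reference, a trimmed sketch of the same start invocation with only a subset of the addons enabled at start time (any combination of the --addons flags from the command above works the same way):

	# Single-node cluster with several addons enabled in one shot.
	out/minikube-linux-amd64 start -p addons-835623 --wait=true --memory=4000 \
	  --driver=kvm2 --container-runtime=crio \
	  --addons=ingress --addons=ingress-dns \
	  --addons=metrics-server --addons=csi-hostpath-driver --addons=volumesnapshots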

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-835623 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-835623 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-835623 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-835623 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [afbcf261-6a00-410f-8bfc-9d762c6d3c14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [afbcf261-6a00-410f-8bfc-9d762c6d3c14] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004399253s
addons_test.go:633: (dbg) Run:  kubectl --context addons-835623 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-835623 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-835623 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.54s)
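A minimal sketch of the same credential check run by hand, assuming the gcp-auth webhook has already injected its environment into the busybox pod created above:

	# Print the env vars the gcp-auth addon is expected to set in new pods.
	kubectl --context addons-835623 exec busybox -- /bin/sh -c \
	  'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'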

                                                
                                    
x
+
TestAddons/parallel/Registry (17.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.969687ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-bq7bj" [3695ddb5-a636-4113-96e4-dedadf9b27e0] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004753297s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rtchc" [642d32b3-1efb-4e55-8406-232124327998] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003150204s
addons_test.go:331: (dbg) Run:  kubectl --context addons-835623 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-835623 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-835623 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.00851627s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 ip
2025/04/08 18:15:55 [DEBUG] GET http://192.168.39.89:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.94s)
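A minimal sketch of the same reachability checks run manually against this profile (registry-test is just a throwaway pod name, cleaned up by --rm):

	# Probe the in-cluster registry Service via its cluster DNS name from a one-off pod.
	kubectl --context addons-835623 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# From the host, the test also probes port 5000 on the node IP.
	curl -s "http://$(out/minikube-linux-amd64 -p addons-835623 ip):5000/"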

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t867t" [e467d03d-e44f-4221-82ec-c60a3050b3b8] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.189958548s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable inspektor-gadget --alsologtostderr -v=1: (6.466378844s)
--- PASS: TestAddons/parallel/InspektorGadget (11.66s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.87s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.977831ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-hw9g7" [dbdceedc-2077-4048-80db-7cb73c65d3d4] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003342718s
addons_test.go:402: (dbg) Run:  kubectl --context addons-835623 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0408 18:15:56.636997  148487 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0408 18:15:56.641628  148487 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0408 18:15:56.641661  148487 kapi.go:107] duration metric: took 4.681638ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.693534ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [862e26eb-aaed-4e07-830b-4a8bbad0ca6b] Pending
helpers_test.go:344: "task-pv-pod" [862e26eb-aaed-4e07-830b-4a8bbad0ca6b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [862e26eb-aaed-4e07-830b-4a8bbad0ca6b] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004946029s
addons_test.go:511: (dbg) Run:  kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-835623 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-835623 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-835623 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-835623 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [00a44f07-c161-4e70-bc20-d3c8230b96e5] Pending
helpers_test.go:344: "task-pv-pod-restore" [00a44f07-c161-4e70-bc20-d3c8230b96e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [00a44f07-c161-4e70-bc20-d3c8230b96e5] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00415877s
addons_test.go:553: (dbg) Run:  kubectl --context addons-835623 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-835623 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-835623 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable volumesnapshots --alsologtostderr -v=1: (1.11723826s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.961289788s)
--- PASS: TestAddons/parallel/CSI (47.86s)
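The snapshot-and-restore flow exercised above boils down to the following kubectl sequence (the manifests are the ones under testdata/csi-hostpath-driver/ referenced in the log):

	# 1. PVC backed by the csi-hostpath driver plus a pod that mounts it.
	kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# 2. Snapshot the volume, then drop the original pod and claim.
	kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-835623 delete pod task-pv-pod
	kubectl --context addons-835623 delete pvc hpvc
	# 3. Restore a new claim from the snapshot and mount it in a fresh pod.
	kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-835623 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml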

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-835623 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-835623 --alsologtostderr -v=1: (1.138181793s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-smnjn" [c0f05bff-1f1c-4f4f-a6e9-c9dfc3ca36c4] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-smnjn" [c0f05bff-1f1c-4f4f-a6e9-c9dfc3ca36c4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-smnjn" [c0f05bff-1f1c-4f4f-a6e9-c9dfc3ca36c4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004906425s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable headlamp --alsologtostderr -v=1: (6.162675066s)
--- PASS: TestAddons/parallel/Headlamp (21.31s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-59tdt" [b7857f49-59a9-4892-88ec-a3dd2eb03441] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005086088s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (58.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-835623 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-835623 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7533c4e8-8f81-492a-95e6-aa30a117d108] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7533c4e8-8f81-492a-95e6-aa30a117d108] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7533c4e8-8f81-492a-95e6-aa30a117d108] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004053567s
addons_test.go:906: (dbg) Run:  kubectl --context addons-835623 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 ssh "cat /opt/local-path-provisioner/pvc-19f0b297-ccd7-4f7e-8774-5015739a28ea_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-835623 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-835623 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.053387071s)
--- PASS: TestAddons/parallel/LocalPath (58.93s)
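
Note: the ssh "cat /opt/local-path-provisioner/..." step reads back data written through the test-pvc claim; the pvc-<uid> directory name changes every run. A hedged sketch of locating it via the claim's bound volume name, assuming the provisioner's usual <pv-name>_<namespace>_<pvc-name> layout:
	PV=$(kubectl --context addons-835623 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	out/minikube-linux-amd64 -p addons-835623 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"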

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bz8hp" [aa2741c5-c5d0-499e-b5fd-788420d37b9b] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003485329s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

                                                
                                    
TestAddons/parallel/Yakd (10.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-p9p7r" [99f59186-fba0-4442-b67a-75a93f5950b7] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004260427s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-835623 addons disable yakd --alsologtostderr -v=1: (5.979303158s)
--- PASS: TestAddons/parallel/Yakd (10.98s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-835623
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-835623: (1m31.019939016s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-835623
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-835623
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-835623
--- PASS: TestAddons/StoppedEnableDisable (91.34s)

                                                
                                    
TestCertOptions (61.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-530977 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-530977 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.251711309s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-530977 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-530977 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-530977 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-530977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-530977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-530977: (1.110191495s)
--- PASS: TestCertOptions (61.85s)
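
Note: the openssl step above is what verifies that the extra --apiserver-ips/--apiserver-names SANs and the non-default port made it into the generated certificate and kubeconfig. A hedged sketch of the same check by hand, profile name taken from this run:
	out/minikube-linux-amd64 -p cert-options-530977 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192\.168\.15\.15|www\.google\.com'
	kubectl --context cert-options-530977 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # expect the URL to end in :8555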

                                                
                                    
TestCertExpiration (287.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-705566 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-705566 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (42.987514426s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-705566 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-705566 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m3.983958268s)
helpers_test.go:175: Cleaning up "cert-expiration-705566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-705566
--- PASS: TestCertExpiration (287.90s)
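
Note: the first start issues certificates valid for only 3m; the second start with --cert-expiration=8760h regenerates them. A hedged sketch of confirming the new lifetime on the node, using the cert path shown by TestCertOptions above:
	out/minikube-linux-amd64 -p cert-expiration-705566 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	# after the 8760h restart, notAfter should be roughly one year out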

                                                
                                    
TestForceSystemdFlag (105.09s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-042482 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-042482 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.798432689s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-042482 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-042482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-042482
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-042482: (1.07301256s)
--- PASS: TestForceSystemdFlag (105.09s)
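
Note: with --force-systemd the test reads the CRI-O drop-in to confirm the systemd cgroup manager is configured. A hedged sketch of that check; cgroup_manager is the standard CRI-O config key, and the expected value is assumed to be "systemd":
	out/minikube-linux-amd64 -p force-systemd-flag-042482 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
	# expected: cgroup_manager = "systemd"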

                                                
                                    
TestForceSystemdEnv (65.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-466042 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-466042 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.174105737s)
helpers_test.go:175: Cleaning up "force-systemd-env-466042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-466042
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-466042: (1.052942995s)
--- PASS: TestForceSystemdEnv (65.23s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0408 19:16:41.082933  148487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 19:16:41.083114  148487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0408 19:16:41.116572  148487 install.go:62] docker-machine-driver-kvm2: exit status 1
W0408 19:16:41.116801  148487 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0408 19:16:41.116875  148487 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3456654696/001/docker-machine-driver-kvm2
I0408 19:16:41.329148  148487 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3456654696/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000537548 gz:0xc0005375d0 tar:0xc000537580 tar.bz2:0xc000537590 tar.gz:0xc0005375a0 tar.xz:0xc0005375b0 tar.zst:0xc0005375c0 tbz2:0xc000537590 tgz:0xc0005375a0 txz:0xc0005375b0 tzst:0xc0005375c0 xz:0xc0005375d8 zip:0xc0005375e0 zst:0xc0005375f0] Getters:map[file:0xc002148890 http:0xc00054c8c0 https:0xc00054c910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0408 19:16:41.329209  148487 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3456654696/001/docker-machine-driver-kvm2
I0408 19:16:43.351170  148487 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 19:16:43.351282  148487 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0408 19:16:43.388276  148487 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0408 19:16:43.388328  148487 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0408 19:16:43.388414  148487 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0408 19:16:43.388453  148487 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3456654696/002/docker-machine-driver-kvm2
I0408 19:16:43.454389  148487 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3456654696/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000537548 gz:0xc0005375d0 tar:0xc000537580 tar.bz2:0xc000537590 tar.gz:0xc0005375a0 tar.xz:0xc0005375b0 tar.zst:0xc0005375c0 tbz2:0xc000537590 tgz:0xc0005375a0 txz:0xc0005375b0 tzst:0xc0005375c0 xz:0xc0005375d8 zip:0xc0005375e0 zst:0xc0005375f0] Getters:map[file:0xc000503220 http:0xc0023bc500 https:0xc0023bc550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0408 19:16:43.454459  148487 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3456654696/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.04s)
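
Note: the 404s above are the expected path: the v1.3.0 release has no checksum for the arch-suffixed asset, so the download falls back to the common name. A rough shell sketch of that try-then-fall-back pattern; the curl invocation is illustrative, not what minikube itself runs:
	BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
	curl -fsSL -o docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2-amd64" \
	  || curl -fsSL -o docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2"
	chmod +x docker-machine-driver-kvm2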

                                                
                                    
TestErrorSpam/setup (41.36s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-864565 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-864565 --driver=kvm2  --container-runtime=crio
E0408 18:20:19.907921  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:19.914471  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:19.925987  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:19.947523  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:19.989078  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:20.070722  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:20.232327  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:20.554093  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:21.196191  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:22.477920  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:25.040924  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:30.162759  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:20:40.404990  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-864565 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-864565 --driver=kvm2  --container-runtime=crio: (41.361520503s)
--- PASS: TestErrorSpam/setup (41.36s)

                                                
                                    
TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (5.08s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop
E0408 18:21:00.886611  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop: (2.341382916s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop: (1.328420091s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-864565 --log_dir /tmp/nospam-864565 stop: (1.410668549s)
--- PASS: TestErrorSpam/stop (5.08s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20604-141129/.minikube/files/etc/test/nested/copy/148487/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0408 18:21:41.849068  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-391629 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.421726368s)
--- PASS: TestFunctional/serial/StartWithProxy (57.42s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.07s)

=== RUN   TestFunctional/serial/SoftStart
I0408 18:22:01.755951  148487 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-391629 --alsologtostderr -v=8: (40.071336931s)
functional_test.go:680: soft start took 40.071932134s for "functional-391629" cluster.
I0408 18:22:41.827672  148487 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (40.07s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-391629 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:3.1: (1.182210985s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:3.3: (1.196442463s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 cache add registry.k8s.io/pause:latest: (1.183925553s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-391629 /tmp/TestFunctionalserialCacheCmdcacheadd_local4084290316/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache add minikube-local-cache-test:functional-391629
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 cache add minikube-local-cache-test:functional-391629: (1.710298757s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache delete minikube-local-cache-test:functional-391629
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-391629
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)
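
Note: this exercise builds a throwaway local image and pushes it into the node's image cache. A hedged sketch of the same flow, assuming a Dockerfile in the current directory; the crictl check mirrors the verify_cache_inside_node step further below:
	docker build -t minikube-local-cache-test:functional-391629 .
	out/minikube-linux-amd64 -p functional-391629 cache add minikube-local-cache-test:functional-391629
	out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl images | grep minikube-local-cache-test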

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.975619ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 cache reload: (1.032492231s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
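
Note: cache reload is exercised by deleting a cached image inside the node and restoring it from the host-side cache; every command below appears in the run above and can be replayed as-is against the functional-391629 profile:
	out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image gone, as expected"
	out/minikube-linux-amd64 -p functional-391629 cache reload
	out/minikube-linux-amd64 -p functional-391629 ssh sudo crictl inspecti registry.k8s.io/pause:latest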

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 kubectl -- --context functional-391629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-391629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 18:23:03.773945  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-391629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.862475333s)
functional_test.go:778: restart took 31.862625088s for "functional-391629" cluster.
I0408 18:23:21.885478  148487 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (31.86s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-391629 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.41s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 logs: (1.406820635s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 logs --file /tmp/TestFunctionalserialLogsFileCmd2856496078/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 logs --file /tmp/TestFunctionalserialLogsFileCmd2856496078/001/logs.txt: (1.486306584s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.45s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-391629 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-391629
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-391629: exit status 115 (304.087395ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.60:30968 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-391629 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)
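
Note: minikube service exits with status 115 (SVC_UNREACHABLE) when no running pod backs the service, which is what the exit-status check above asserts. A hedged sketch of scripting the same check against this run's profile:
	out/minikube-linux-amd64 service invalid-svc -p functional-391629
	rc=$?
	[ "$rc" -eq 115 ] && echo "SVC_UNREACHABLE: no running pod for invalid-svc (exit $rc)"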

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 config get cpus: exit status 14 (63.127977ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 config get cpus: exit status 14 (59.927673ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
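
Note: config get exits 14 when the key is not set, which is why the unset/get pairs above report exit status 14. A minimal sketch of the round trip against the same profile:
	out/minikube-linux-amd64 -p functional-391629 config set cpus 2
	out/minikube-linux-amd64 -p functional-391629 config get cpus          # prints 2
	out/minikube-linux-amd64 -p functional-391629 config unset cpus
	out/minikube-linux-amd64 -p functional-391629 config get cpus || echo "unset (exit $?)"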

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.8s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-391629 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-391629 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 157302: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.80s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.267847ms)

                                                
                                                
-- stdout --
	* [functional-391629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:24:01.291557  157181 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:24:01.291659  157181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:24:01.291664  157181 out.go:358] Setting ErrFile to fd 2...
	I0408 18:24:01.291668  157181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:24:01.291867  157181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:24:01.292474  157181 out.go:352] Setting JSON to false
	I0408 18:24:01.293431  157181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7586,"bootTime":1744129055,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:24:01.293500  157181 start.go:139] virtualization: kvm guest
	I0408 18:24:01.295638  157181 out.go:177] * [functional-391629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 18:24:01.297188  157181 notify.go:220] Checking for updates...
	I0408 18:24:01.297222  157181 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 18:24:01.298651  157181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:24:01.299986  157181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:24:01.301459  157181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:24:01.303060  157181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:24:01.304709  157181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:24:01.306616  157181 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:24:01.307007  157181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:24:01.307084  157181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:24:01.322658  157181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0408 18:24:01.323222  157181 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:24:01.323779  157181 main.go:141] libmachine: Using API Version  1
	I0408 18:24:01.323803  157181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:24:01.324195  157181 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:24:01.324396  157181 main.go:141] libmachine: (functional-391629) Calling .DriverName
	I0408 18:24:01.324640  157181 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 18:24:01.324936  157181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:24:01.325009  157181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:24:01.340733  157181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0408 18:24:01.341302  157181 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:24:01.341981  157181 main.go:141] libmachine: Using API Version  1
	I0408 18:24:01.342023  157181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:24:01.342432  157181 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:24:01.342661  157181 main.go:141] libmachine: (functional-391629) Calling .DriverName
	I0408 18:24:01.382017  157181 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 18:24:01.383631  157181 start.go:297] selected driver: kvm2
	I0408 18:24:01.383657  157181 start.go:901] validating driver "kvm2" against &{Name:functional-391629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-391629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:24:01.383814  157181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:24:01.386860  157181 out.go:201] 
	W0408 18:24:01.388550  157181 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 18:24:01.390186  157181 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
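
Note: with --dry-run minikube only validates the request; 250MB fails the ~1800MB minimum and the process exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the VM. A sketch of checking that from a script, flags copied from the run above:
	out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	echo "exit code: $?"   # 23 in the run above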

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.363752ms)

                                                
                                                
-- stdout --
	* [functional-391629] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:24:01.588960  157237 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:24:01.589227  157237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:24:01.589238  157237 out.go:358] Setting ErrFile to fd 2...
	I0408 18:24:01.589243  157237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:24:01.589521  157237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:24:01.590112  157237 out.go:352] Setting JSON to false
	I0408 18:24:01.590991  157237 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7587,"bootTime":1744129055,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 18:24:01.591057  157237 start.go:139] virtualization: kvm guest
	I0408 18:24:01.593251  157237 out.go:177] * [functional-391629] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0408 18:24:01.595034  157237 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 18:24:01.595051  157237 notify.go:220] Checking for updates...
	I0408 18:24:01.598097  157237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 18:24:01.599762  157237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 18:24:01.601345  157237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 18:24:01.603038  157237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 18:24:01.604532  157237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 18:24:01.606619  157237 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:24:01.607039  157237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:24:01.607126  157237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:24:01.623381  157237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0408 18:24:01.624012  157237 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:24:01.624723  157237 main.go:141] libmachine: Using API Version  1
	I0408 18:24:01.624741  157237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:24:01.625305  157237 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:24:01.625565  157237 main.go:141] libmachine: (functional-391629) Calling .DriverName
	I0408 18:24:01.625933  157237 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 18:24:01.626307  157237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:24:01.626368  157237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:24:01.643105  157237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45773
	I0408 18:24:01.643709  157237 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:24:01.644299  157237 main.go:141] libmachine: Using API Version  1
	I0408 18:24:01.644330  157237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:24:01.644760  157237 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:24:01.645058  157237 main.go:141] libmachine: (functional-391629) Calling .DriverName
	I0408 18:24:01.681912  157237 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0408 18:24:01.683623  157237 start.go:297] selected driver: kvm2
	I0408 18:24:01.683654  157237 start.go:901] validating driver "kvm2" against &{Name:functional-391629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-391629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.60 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 18:24:01.683818  157237 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 18:24:01.686656  157237 out.go:201] 
	W0408 18:24:01.688742  157237 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 18:24:01.690456  157237 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
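Note on the French output above: this test deliberately exercises minikube's localized messages, so the stderr is expected to be in French. A rough way to reproduce it outside the test harness, assuming minikube picks the translation from the standard LC_ALL/LANG locale variables (the exact mechanism the harness uses is not visible in this log), would be:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-391629 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio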

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-391629 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-391629 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-kjhdx" [5f9ccdd3-6924-4de2-abed-939ecfe8164b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-kjhdx" [5f9ccdd3-6924-4de2-abed-939ecfe8164b] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003691853s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.60:31227
functional_test.go:1692: http://192.168.39.60:31227: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-kjhdx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.60:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.60:31227
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.60s)
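For reference, the flow exercised above can be replayed by hand. The first three commands are the ones recorded in the log; the trailing curl is an added illustration of the HTTP check the test performs internally against the reported NodePort URL:

	kubectl --context functional-391629 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-391629 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-391629 service hello-node-connect --url
	curl -s http://192.168.39.60:31227/   # prints the echoserver response shown above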

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c3b683e6-cb1e-4198-b795-f76be8e26e58] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003543284s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-391629 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-391629 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-391629 get pvc myclaim -o=json
I0408 18:23:37.063203  148487 retry.go:31] will retry after 1.676196577s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:51d41432-3991-4326-8410-88707c35ce12 ResourceVersion:716 Generation:0 CreationTimestamp:2025-04-08 18:23:36 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-51d41432-3991-4326-8410-88707c35ce12 StorageClassName:0xc00174ad60 VolumeMode:0xc00174ad70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-391629 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-391629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6a1c6627-1d25-4fdb-8754-e262696a13fe] Pending
helpers_test.go:344: "sp-pod" [6a1c6627-1d25-4fdb-8754-e262696a13fe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6a1c6627-1d25-4fdb-8754-e262696a13fe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003974817s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-391629 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-391629 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-391629 delete -f testdata/storage-provisioner/pod.yaml: (3.963828792s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-391629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ad3e4365-cb13-4cd0-a6f4-5b4fa8a5159f] Pending
helpers_test.go:344: "sp-pod" [ad3e4365-cb13-4cd0-a6f4-5b4fa8a5159f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ad3e4365-cb13-4cd0-a6f4-5b4fa8a5159f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005004426s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-391629 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.58s)
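The claim created from testdata/storage-provisioner/pvc.yaml can be reconstructed from the last-applied-configuration annotation captured in the log above; a minimal equivalent, applied here from a heredoc instead of the repository file, is:

	kubectl --context functional-391629 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF
	# The default-storageclass and storage-provisioner addons enabled in this profile bind the claim to a hostPath volume.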

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh -n functional-391629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cp functional-391629:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3526481979/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh -n functional-391629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh -n functional-391629 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)

                                                
                                    
TestFunctional/parallel/MySQL (25.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-391629 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-b7xp6" [f8cca5c4-e164-44fd-80ee-fd6e1366c3be] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-b7xp6" [f8cca5c4-e164-44fd-80ee-fd6e1366c3be] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.005893805s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-391629 exec mysql-58ccfd96bb-b7xp6 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-391629 exec mysql-58ccfd96bb-b7xp6 -- mysql -ppassword -e "show databases;": exit status 1 (158.469979ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0408 18:24:07.763680  148487 retry.go:31] will retry after 1.423976852s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-391629 exec mysql-58ccfd96bb-b7xp6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.93s)

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/148487/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /etc/test/nested/copy/148487/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/148487.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /etc/ssl/certs/148487.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/148487.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /usr/share/ca-certificates/148487.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/1484872.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /etc/ssl/certs/1484872.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/1484872.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /usr/share/ca-certificates/1484872.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-391629 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.32s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "sudo systemctl is-active docker": exit status 1 (247.462341ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "sudo systemctl is-active containerd": exit status 1 (235.691705ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
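Both checks above confirm the non-selected runtimes are disabled; the inverse check for the active runtime is not part of this test, but (assuming CRI-O's systemd unit is named crio, which this log does not show) it could be run the same way:

	out/minikube-linux-amd64 -p functional-391629 ssh "sudo systemctl is-active crio"   # expected: active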

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-391629 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-391629 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-gf44c" [63670e9c-32a8-427d-83fd-a6451a254dbf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-gf44c" [63670e9c-32a8-427d-83fd-a6451a254dbf] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00328392s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 155761: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-391629 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [96f2c372-b8fb-40c4-a770-328d7dc59a19] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [96f2c372-b8fb-40c4-a770-328d7dc59a19] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003286551s
I0408 18:23:44.247116  148487 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 image ls --format short --alsologtostderr: (1.151309421s)
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-391629 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-391629
localhost/kicbase/echo-server:functional-391629
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-391629 image ls --format short --alsologtostderr:
I0408 18:24:10.441285  157715 out.go:345] Setting OutFile to fd 1 ...
I0408 18:24:10.441551  157715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:10.441562  157715 out.go:358] Setting ErrFile to fd 2...
I0408 18:24:10.441566  157715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:10.441781  157715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
I0408 18:24:10.442492  157715 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:10.442593  157715 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:10.442959  157715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:10.443017  157715 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:10.459648  157715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
I0408 18:24:10.460322  157715 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:10.461075  157715 main.go:141] libmachine: Using API Version  1
I0408 18:24:10.461099  157715 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:10.461543  157715 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:10.461800  157715 main.go:141] libmachine: (functional-391629) Calling .GetState
I0408 18:24:10.463995  157715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:10.464055  157715 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:10.480758  157715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37683
I0408 18:24:10.481357  157715 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:10.481805  157715 main.go:141] libmachine: Using API Version  1
I0408 18:24:10.481849  157715 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:10.482240  157715 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:10.482445  157715 main.go:141] libmachine: (functional-391629) Calling .DriverName
I0408 18:24:10.482703  157715 ssh_runner.go:195] Run: systemctl --version
I0408 18:24:10.482736  157715 main.go:141] libmachine: (functional-391629) Calling .GetSSHHostname
I0408 18:24:10.486360  157715 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:10.486752  157715 main.go:141] libmachine: (functional-391629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:c2:9f", ip: ""} in network mk-functional-391629: {Iface:virbr1 ExpiryTime:2025-04-08 19:21:19 +0000 UTC Type:0 Mac:52:54:00:4b:c2:9f Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-391629 Clientid:01:52:54:00:4b:c2:9f}
I0408 18:24:10.486800  157715 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined IP address 192.168.39.60 and MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:10.486928  157715 main.go:141] libmachine: (functional-391629) Calling .GetSSHPort
I0408 18:24:10.487150  157715 main.go:141] libmachine: (functional-391629) Calling .GetSSHKeyPath
I0408 18:24:10.487375  157715 main.go:141] libmachine: (functional-391629) Calling .GetSSHUsername
I0408 18:24:10.487532  157715 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/functional-391629/id_rsa Username:docker}
I0408 18:24:10.630121  157715 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:24:11.533585  157715 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.533601  157715 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.533970  157715 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.533991  157715 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.534001  157715 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.534009  157715 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.534017  157715 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
I0408 18:24:11.534258  157715 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.534275  157715 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.534293  157715 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-391629 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 1ff4bb4faebcf | 49.3MB |
| localhost/minikube-local-cache-test     | functional-391629  | 1a9263fecf35e | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 4cad75abc83d5 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-391629  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-391629 image ls --format table --alsologtostderr:
I0408 18:24:11.986731  158037 out.go:345] Setting OutFile to fd 1 ...
I0408 18:24:11.987088  158037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.987100  158037 out.go:358] Setting ErrFile to fd 2...
I0408 18:24:11.987107  158037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.987426  158037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
I0408 18:24:11.988355  158037 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.988504  158037 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.989073  158037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.989159  158037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:12.007436  158037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
I0408 18:24:12.007988  158037 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:12.008624  158037 main.go:141] libmachine: Using API Version  1
I0408 18:24:12.008652  158037 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:12.009138  158037 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:12.009389  158037 main.go:141] libmachine: (functional-391629) Calling .GetState
I0408 18:24:12.011603  158037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:12.011660  158037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:12.028157  158037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
I0408 18:24:12.028623  158037 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:12.029202  158037 main.go:141] libmachine: Using API Version  1
I0408 18:24:12.029231  158037 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:12.029588  158037 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:12.029785  158037 main.go:141] libmachine: (functional-391629) Calling .DriverName
I0408 18:24:12.030013  158037 ssh_runner.go:195] Run: systemctl --version
I0408 18:24:12.030047  158037 main.go:141] libmachine: (functional-391629) Calling .GetSSHHostname
I0408 18:24:12.033241  158037 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:12.033678  158037 main.go:141] libmachine: (functional-391629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:c2:9f", ip: ""} in network mk-functional-391629: {Iface:virbr1 ExpiryTime:2025-04-08 19:21:19 +0000 UTC Type:0 Mac:52:54:00:4b:c2:9f Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-391629 Clientid:01:52:54:00:4b:c2:9f}
I0408 18:24:12.033714  158037 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined IP address 192.168.39.60 and MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:12.033888  158037 main.go:141] libmachine: (functional-391629) Calling .GetSSHPort
I0408 18:24:12.034078  158037 main.go:141] libmachine: (functional-391629) Calling .GetSSHKeyPath
I0408 18:24:12.034245  158037 main.go:141] libmachine: (functional-391629) Calling .GetSSHUsername
I0408 18:24:12.034396  158037 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/functional-391629/id_rsa Username:docker}
I0408 18:24:12.153679  158037 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:24:12.208825  158037 main.go:141] libmachine: Making call to close driver server
I0408 18:24:12.208844  158037 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:12.209195  158037 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
I0408 18:24:12.209228  158037 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:12.209242  158037 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:12.209257  158037 main.go:141] libmachine: Making call to close driver server
I0408 18:24:12.209268  158037 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:12.209527  158037 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:12.209548  158037 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:12.209561  158037 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
2025/04/08 18:24:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-391629 image ls --format json --alsologtostderr:
[{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49323988"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-391629"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa67
99fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":["docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab","docker.io/library/ngin
x@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca"],"repoTags":["docker.io/library/nginx:latest"],"size":"196210580"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@
sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"1a9263fecf35ed07c31970cf71c
96dee2bc3239599a256c49459a3e84f9f5616","repoDigests":["localhost/minikube-local-cache-test@sha256:b9c7e27f346d8c222f17439828152d3415cae6ea5ec3c538e7f519bca8dc96de"],"repoTags":["localhost/minikube-local-cache-test:functional-391629"],"size":"3328"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a5
95f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["regis
try.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-391629 image ls --format json --alsologtostderr:
I0408 18:24:11.675146  157943 out.go:345] Setting OutFile to fd 1 ...
I0408 18:24:11.675266  157943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.675275  157943 out.go:358] Setting ErrFile to fd 2...
I0408 18:24:11.675279  157943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.675484  157943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
I0408 18:24:11.676107  157943 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.676232  157943 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.676653  157943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.676743  157943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.697073  157943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
I0408 18:24:11.697599  157943 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.700485  157943 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.700524  157943 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.701104  157943 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.701478  157943 main.go:141] libmachine: (functional-391629) Calling .GetState
I0408 18:24:11.703898  157943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.703990  157943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.723182  157943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
I0408 18:24:11.723834  157943 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.724402  157943 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.724428  157943 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.724948  157943 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.725136  157943 main.go:141] libmachine: (functional-391629) Calling .DriverName
I0408 18:24:11.725398  157943 ssh_runner.go:195] Run: systemctl --version
I0408 18:24:11.725430  157943 main.go:141] libmachine: (functional-391629) Calling .GetSSHHostname
I0408 18:24:11.728420  157943 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.728855  157943 main.go:141] libmachine: (functional-391629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:c2:9f", ip: ""} in network mk-functional-391629: {Iface:virbr1 ExpiryTime:2025-04-08 19:21:19 +0000 UTC Type:0 Mac:52:54:00:4b:c2:9f Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-391629 Clientid:01:52:54:00:4b:c2:9f}
I0408 18:24:11.728894  157943 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined IP address 192.168.39.60 and MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.729135  157943 main.go:141] libmachine: (functional-391629) Calling .GetSSHPort
I0408 18:24:11.729324  157943 main.go:141] libmachine: (functional-391629) Calling .GetSSHKeyPath
I0408 18:24:11.729563  157943 main.go:141] libmachine: (functional-391629) Calling .GetSSHUsername
I0408 18:24:11.729781  157943 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/functional-391629/id_rsa Username:docker}
I0408 18:24:11.833345  157943 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:24:11.919695  157943 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.919713  157943 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.920002  157943 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
I0408 18:24:11.920036  157943 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.920052  157943 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.920092  157943 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.920106  157943 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.920354  157943 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.920372  157943 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.920399  157943 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
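
The JSON listing above is produced by "image ls --format json", which wraps "sudo crictl images --output json" inside the guest (both commands appear in the log). A minimal by-hand sketch; the jq filter is illustrative and not part of the test:
	# List the tags of every image reported in the JSON form (assumes jq on the host)
	out/minikube-linux-amd64 -p functional-391629 image ls --format json | jq -r '.[].repoTags[]'
	# The same data, straight from the container runtime inside the guest
	out/minikube-linux-amd64 -p functional-391629 ssh "sudo crictl images --output json"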

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-391629 image ls --format yaml --alsologtostderr:
- id: 4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
- docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca
repoTags:
- docker.io/library/nginx:latest
size: "196210580"
- id: 1a9263fecf35ed07c31970cf71c96dee2bc3239599a256c49459a3e84f9f5616
repoDigests:
- localhost/minikube-local-cache-test@sha256:b9c7e27f346d8c222f17439828152d3415cae6ea5ec3c538e7f519bca8dc96de
repoTags:
- localhost/minikube-local-cache-test:functional-391629
size: "3328"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-391629
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc
repoTags:
- docker.io/library/nginx:alpine
size: "49323988"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-391629 image ls --format yaml --alsologtostderr:
I0408 18:24:11.410735  157877 out.go:345] Setting OutFile to fd 1 ...
I0408 18:24:11.410893  157877 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.410907  157877 out.go:358] Setting ErrFile to fd 2...
I0408 18:24:11.410913  157877 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.411195  157877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
I0408 18:24:11.412127  157877 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.412292  157877 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.412839  157877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.412921  157877 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.430872  157877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
I0408 18:24:11.431487  157877 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.432093  157877 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.432119  157877 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.432493  157877 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.432767  157877 main.go:141] libmachine: (functional-391629) Calling .GetState
I0408 18:24:11.435078  157877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.435127  157877 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.452581  157877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
I0408 18:24:11.453066  157877 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.453549  157877 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.453571  157877 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.454024  157877 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.454234  157877 main.go:141] libmachine: (functional-391629) Calling .DriverName
I0408 18:24:11.454469  157877 ssh_runner.go:195] Run: systemctl --version
I0408 18:24:11.454496  157877 main.go:141] libmachine: (functional-391629) Calling .GetSSHHostname
I0408 18:24:11.457919  157877 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.458389  157877 main.go:141] libmachine: (functional-391629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:c2:9f", ip: ""} in network mk-functional-391629: {Iface:virbr1 ExpiryTime:2025-04-08 19:21:19 +0000 UTC Type:0 Mac:52:54:00:4b:c2:9f Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-391629 Clientid:01:52:54:00:4b:c2:9f}
I0408 18:24:11.458427  157877 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined IP address 192.168.39.60 and MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.458551  157877 main.go:141] libmachine: (functional-391629) Calling .GetSSHPort
I0408 18:24:11.458747  157877 main.go:141] libmachine: (functional-391629) Calling .GetSSHKeyPath
I0408 18:24:11.458895  157877 main.go:141] libmachine: (functional-391629) Calling .GetSSHUsername
I0408 18:24:11.459098  157877 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/functional-391629/id_rsa Username:docker}
I0408 18:24:11.547618  157877 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 18:24:11.607012  157877 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.607028  157877 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.607411  157877 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.607424  157877 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
I0408 18:24:11.607432  157877 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.607478  157877 main.go:141] libmachine: Making call to close driver server
I0408 18:24:11.607489  157877 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:11.607847  157877 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:11.607883  157877 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:11.607849  157877 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh pgrep buildkitd: exit status 1 (254.404106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image build -t localhost/my-image:functional-391629 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 image build -t localhost/my-image:functional-391629 testdata/build --alsologtostderr: (4.013531946s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-391629 image build -t localhost/my-image:functional-391629 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> eb32f88f3cc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-391629
--> fb21c478545
Successfully tagged localhost/my-image:functional-391629
fb21c47854530c37c1eb41d8dee46abfc3ec07ea1d98fd206fc9c289e31a3dad
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-391629 image build -t localhost/my-image:functional-391629 testdata/build --alsologtostderr:
I0408 18:24:11.851232  158008 out.go:345] Setting OutFile to fd 1 ...
I0408 18:24:11.851518  158008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.851529  158008 out.go:358] Setting ErrFile to fd 2...
I0408 18:24:11.851533  158008 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:24:11.851763  158008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
I0408 18:24:11.852357  158008 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.853005  158008 config.go:182] Loaded profile config "functional-391629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0408 18:24:11.853375  158008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.853428  158008 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.870548  158008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
I0408 18:24:11.871105  158008 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.871672  158008 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.871695  158008 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.872195  158008 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.872462  158008 main.go:141] libmachine: (functional-391629) Calling .GetState
I0408 18:24:11.874912  158008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 18:24:11.874976  158008 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:24:11.891395  158008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
I0408 18:24:11.891963  158008 main.go:141] libmachine: () Calling .GetVersion
I0408 18:24:11.892458  158008 main.go:141] libmachine: Using API Version  1
I0408 18:24:11.892482  158008 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:24:11.892908  158008 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:24:11.893155  158008 main.go:141] libmachine: (functional-391629) Calling .DriverName
I0408 18:24:11.893403  158008 ssh_runner.go:195] Run: systemctl --version
I0408 18:24:11.893435  158008 main.go:141] libmachine: (functional-391629) Calling .GetSSHHostname
I0408 18:24:11.896148  158008 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.896571  158008 main.go:141] libmachine: (functional-391629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:c2:9f", ip: ""} in network mk-functional-391629: {Iface:virbr1 ExpiryTime:2025-04-08 19:21:19 +0000 UTC Type:0 Mac:52:54:00:4b:c2:9f Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:functional-391629 Clientid:01:52:54:00:4b:c2:9f}
I0408 18:24:11.896607  158008 main.go:141] libmachine: (functional-391629) DBG | domain functional-391629 has defined IP address 192.168.39.60 and MAC address 52:54:00:4b:c2:9f in network mk-functional-391629
I0408 18:24:11.896710  158008 main.go:141] libmachine: (functional-391629) Calling .GetSSHPort
I0408 18:24:11.896911  158008 main.go:141] libmachine: (functional-391629) Calling .GetSSHKeyPath
I0408 18:24:11.897057  158008 main.go:141] libmachine: (functional-391629) Calling .GetSSHUsername
I0408 18:24:11.897227  158008 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/functional-391629/id_rsa Username:docker}
I0408 18:24:12.001456  158008 build_images.go:161] Building image from path: /tmp/build.1856059304.tar
I0408 18:24:12.001591  158008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0408 18:24:12.022966  158008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1856059304.tar
I0408 18:24:12.027653  158008 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1856059304.tar: stat -c "%s %y" /var/lib/minikube/build/build.1856059304.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1856059304.tar': No such file or directory
I0408 18:24:12.027711  158008 ssh_runner.go:362] scp /tmp/build.1856059304.tar --> /var/lib/minikube/build/build.1856059304.tar (3072 bytes)
I0408 18:24:12.075204  158008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1856059304
I0408 18:24:12.089976  158008 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1856059304 -xf /var/lib/minikube/build/build.1856059304.tar
I0408 18:24:12.102846  158008 crio.go:315] Building image: /var/lib/minikube/build/build.1856059304
I0408 18:24:12.102975  158008 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-391629 /var/lib/minikube/build/build.1856059304 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0408 18:24:15.778740  158008 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-391629 /var/lib/minikube/build/build.1856059304 --cgroup-manager=cgroupfs: (3.675732329s)
I0408 18:24:15.778822  158008 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1856059304
I0408 18:24:15.790387  158008 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1856059304.tar
I0408 18:24:15.803228  158008 build_images.go:217] Built localhost/my-image:functional-391629 from /tmp/build.1856059304.tar
I0408 18:24:15.803269  158008 build_images.go:133] succeeded building to: functional-391629
I0408 18:24:15.803274  158008 build_images.go:134] failed building to: 
I0408 18:24:15.803324  158008 main.go:141] libmachine: Making call to close driver server
I0408 18:24:15.803337  158008 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:15.803659  158008 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:15.803683  158008 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:15.803693  158008 main.go:141] libmachine: Making call to close driver server
I0408 18:24:15.803700  158008 main.go:141] libmachine: (functional-391629) Calling .Close
I0408 18:24:15.803700  158008 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
I0408 18:24:15.803944  158008 main.go:141] libmachine: Successfully made call to close driver server
I0408 18:24:15.803959  158008 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 18:24:15.803977  158008 main.go:141] libmachine: (functional-391629) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)
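
The contents of testdata/build are not reproduced in this report; judging from the three STEP lines above, an equivalent build context would look roughly like the sketch below (the content.txt payload is a placeholder, and the actual test files may differ):
	mkdir -p build && cd build
	echo "test content" > content.txt          # placeholder payload for the ADD step
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-391629 image build -t localhost/my-image:functional-391629 .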

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.554837815s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-391629
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service list -o json
functional_test.go:1511: Took "455.482824ms" to run "out/minikube-linux-amd64 -p functional-391629 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.60:32009
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.60:32009
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
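
The ServiceCmd subtests above differ only in output format; the sequence below replays them by hand (the NodePort 32009 is specific to this run and will differ elsewhere):
	out/minikube-linux-amd64 -p functional-391629 service list -o json
	out/minikube-linux-amd64 -p functional-391629 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-391629 service hello-node --url
	curl http://192.168.39.60:32009/            # endpoint reported by this run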

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image load --daemon kicbase/echo-server:functional-391629 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 image load --daemon kicbase/echo-server:functional-391629 --alsologtostderr: (3.338837766s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.59s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-391629 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.1.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-391629 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "407.96419ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "56.48105ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "350.653151ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "51.779651ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
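
One way to consume the JSON profile listing checked above; the jq expression is illustrative and assumes the valid/invalid layout current minikube releases emit:
	out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
	out/minikube-linux-amd64 profile list -o json --light    # presumably skips the cluster status probes, hence the faster timing above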

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (21.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdany-port4201646855/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744136626753629163" to /tmp/TestFunctionalparallelMountCmdany-port4201646855/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744136626753629163" to /tmp/TestFunctionalparallelMountCmdany-port4201646855/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744136626753629163" to /tmp/TestFunctionalparallelMountCmdany-port4201646855/001/test-1744136626753629163
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.058761ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0408 18:23:46.989626  148487 retry.go:31] will retry after 673.984428ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  8 18:23 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  8 18:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  8 18:23 test-1744136626753629163
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh cat /mount-9p/test-1744136626753629163
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-391629 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [334dd8d2-1196-4ea7-9606-bfd4db5af065] Pending
helpers_test.go:344: "busybox-mount" [334dd8d2-1196-4ea7-9606-bfd4db5af065] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [334dd8d2-1196-4ea7-9606-bfd4db5af065] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [334dd8d2-1196-4ea7-9606-bfd4db5af065] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.003534684s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-391629 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdany-port4201646855/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.78s)
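
The mount verification performed above, condensed into a by-hand sequence; /tmp/mnt-src is a placeholder for the temp directory the test creates:
	out/minikube-linux-amd64 mount -p functional-391629 /tmp/mnt-src:/mount-9p &        # background the 9p mount helper
	out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the 9p mount is live
	out/minikube-linux-amd64 -p functional-391629 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-391629 ssh "sudo umount -f /mount-9p"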

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image load --daemon kicbase/echo-server:functional-391629 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-391629
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image load --daemon kicbase/echo-server:functional-391629 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image save kicbase/echo-server:functional-391629 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (3.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image rm kicbase/echo-server:functional-391629 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 image rm kicbase/echo-server:functional-391629 --alsologtostderr: (3.381069158s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (3.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-391629 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.724765238s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-391629
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 image save --daemon kicbase/echo-server:functional-391629 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-391629
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
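
The four image save/remove/load subtests above exercise a single round trip; condensed below, with an illustrative local tar path in place of the Jenkins workspace path the test uses:
	out/minikube-linux-amd64 -p functional-391629 image save kicbase/echo-server:functional-391629 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-391629 image rm kicbase/echo-server:functional-391629
	out/minikube-linux-amd64 -p functional-391629 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-391629 image save --daemon kicbase/echo-server:functional-391629
	docker image inspect localhost/kicbase/echo-server:functional-391629    # save --daemon pushes the image back into the local Docker daemon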

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdspecific-port4197005207/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.141745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0408 18:24:08.777943  148487 retry.go:31] will retry after 494.55846ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdspecific-port4197005207/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "sudo umount -f /mount-9p": exit status 1 (309.765821ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-391629 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdspecific-port4197005207/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T" /mount1: exit status 1 (328.002633ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0408 18:24:10.821596  148487 retry.go:31] will retry after 386.064365ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-391629 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-391629 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-391629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963133374/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-391629
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-391629
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-391629
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (207.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-509143 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 18:25:19.902135  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:25:47.616080  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-509143 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.371447339s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.08s)
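
Once the HA start above completes, the cluster layout can be inspected directly; the kubectl context name matches the profile name:
	out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
	kubectl --context ha-509143 get nodes -o wide    # control-plane nodes created by --ha, plus any workers added later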

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-509143 -- rollout status deployment/busybox: (5.353710188s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-4wfjn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-8zvng -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-wt8j6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-4wfjn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-8zvng -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-wt8j6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-4wfjn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-8zvng -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-wt8j6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.68s)
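
The per-pod DNS checks above can be replayed manually; <pod> stands for any of the generated busybox replica names (busybox-58667487b6-* in this run):
	out/minikube-linux-amd64 kubectl -p ha-509143 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p ha-509143 -- exec <pod> -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p ha-509143 -- exec <pod> -- nslookup kubernetes.default
	out/minikube-linux-amd64 kubectl -p ha-509143 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local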

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-4wfjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-4wfjn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-8zvng -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-8zvng -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-wt8j6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-509143 -- exec busybox-58667487b6-wt8j6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
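
The host-reachability check above extracts the host.minikube.internal address from busybox's nslookup output (the awk/cut pipeline relies on that fixed output layout) and pings it; <pod> is a placeholder for a busybox replica name:
	HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-509143 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p ha-509143 -- exec <pod> -- sh -c "ping -c 1 $HOST_IP"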

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-509143 -v=7 --alsologtostderr
E0408 18:28:30.239132  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.245695  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.257270  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.279614  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.321150  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.402731  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.564347  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:30.886018  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:31.527904  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:32.809546  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:35.370987  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:28:40.492987  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-509143 -v=7 --alsologtostderr: (55.713866421s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.61s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-509143 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0408 18:28:50.735192  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp testdata/cp-test.txt ha-509143:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3491085081/001/cp-test_ha-509143.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143:/home/docker/cp-test.txt ha-509143-m02:/home/docker/cp-test_ha-509143_ha-509143-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test_ha-509143_ha-509143-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143:/home/docker/cp-test.txt ha-509143-m03:/home/docker/cp-test_ha-509143_ha-509143-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test_ha-509143_ha-509143-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143:/home/docker/cp-test.txt ha-509143-m04:/home/docker/cp-test_ha-509143_ha-509143-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test_ha-509143_ha-509143-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp testdata/cp-test.txt ha-509143-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3491085081/001/cp-test_ha-509143-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m02:/home/docker/cp-test.txt ha-509143:/home/docker/cp-test_ha-509143-m02_ha-509143.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test_ha-509143-m02_ha-509143.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m02:/home/docker/cp-test.txt ha-509143-m03:/home/docker/cp-test_ha-509143-m02_ha-509143-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test_ha-509143-m02_ha-509143-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m02:/home/docker/cp-test.txt ha-509143-m04:/home/docker/cp-test_ha-509143-m02_ha-509143-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test_ha-509143-m02_ha-509143-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp testdata/cp-test.txt ha-509143-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3491085081/001/cp-test_ha-509143-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m03:/home/docker/cp-test.txt ha-509143:/home/docker/cp-test_ha-509143-m03_ha-509143.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test_ha-509143-m03_ha-509143.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m03:/home/docker/cp-test.txt ha-509143-m02:/home/docker/cp-test_ha-509143-m03_ha-509143-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test_ha-509143-m03_ha-509143-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m03:/home/docker/cp-test.txt ha-509143-m04:/home/docker/cp-test_ha-509143-m03_ha-509143-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test_ha-509143-m03_ha-509143-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp testdata/cp-test.txt ha-509143-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3491085081/001/cp-test_ha-509143-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m04:/home/docker/cp-test.txt ha-509143:/home/docker/cp-test_ha-509143-m04_ha-509143.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143 "sudo cat /home/docker/cp-test_ha-509143-m04_ha-509143.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m04:/home/docker/cp-test.txt ha-509143-m02:/home/docker/cp-test_ha-509143-m04_ha-509143-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test_ha-509143-m04_ha-509143-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 cp ha-509143-m04:/home/docker/cp-test.txt ha-509143-m03:/home/docker/cp-test_ha-509143-m04_ha-509143-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m03 "sudo cat /home/docker/cp-test_ha-509143-m04_ha-509143-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.75s)
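
The CopyFile matrix above repeats one pattern for every source/destination pair: minikube cp places testdata/cp-test.txt on a node (or copies it from node to node), and minikube ssh -n <node> "sudo cat ..." reads it back so the contents can be compared with the local file. One round trip, with the commands taken verbatim from the log and the trailing diff added here purely as an illustration:

	out/minikube-linux-amd64 -p ha-509143 cp testdata/cp-test.txt ha-509143-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-509143 ssh -n ha-509143-m02 "sudo cat /home/docker/cp-test.txt" | diff - testdata/cp-test.txt
	# an empty diff means the file survived the copy unchanged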

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 node stop m02 -v=7 --alsologtostderr
E0408 18:29:11.217337  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:29:52.179230  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:30:19.901932  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-509143 node stop m02 -v=7 --alsologtostderr: (1m31.050642422s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr: exit status 7 (673.085285ms)

                                                
                                                
-- stdout --
	ha-509143
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-509143-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-509143-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-509143-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:30:35.831229  162690 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:30:35.831524  162690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:30:35.831536  162690 out.go:358] Setting ErrFile to fd 2...
	I0408 18:30:35.831540  162690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:30:35.831835  162690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:30:35.832031  162690 out.go:352] Setting JSON to false
	I0408 18:30:35.832065  162690 mustload.go:65] Loading cluster: ha-509143
	I0408 18:30:35.832140  162690 notify.go:220] Checking for updates...
	I0408 18:30:35.832497  162690 config.go:182] Loaded profile config "ha-509143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:30:35.832527  162690 status.go:174] checking status of ha-509143 ...
	I0408 18:30:35.833038  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:35.833141  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:35.854887  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45133
	I0408 18:30:35.855519  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:35.856198  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:35.856233  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:35.856733  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:35.856958  162690 main.go:141] libmachine: (ha-509143) Calling .GetState
	I0408 18:30:35.859053  162690 status.go:371] ha-509143 host status = "Running" (err=<nil>)
	I0408 18:30:35.859078  162690 host.go:66] Checking if "ha-509143" exists ...
	I0408 18:30:35.859445  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:35.859495  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:35.876119  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
	I0408 18:30:35.876633  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:35.877267  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:35.877305  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:35.877741  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:35.878012  162690 main.go:141] libmachine: (ha-509143) Calling .GetIP
	I0408 18:30:35.881077  162690 main.go:141] libmachine: (ha-509143) DBG | domain ha-509143 has defined MAC address 52:54:00:71:fc:48 in network mk-ha-509143
	I0408 18:30:35.881507  162690 main.go:141] libmachine: (ha-509143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:fc:48", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:24:32 +0000 UTC Type:0 Mac:52:54:00:71:fc:48 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-509143 Clientid:01:52:54:00:71:fc:48}
	I0408 18:30:35.881543  162690 main.go:141] libmachine: (ha-509143) DBG | domain ha-509143 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:fc:48 in network mk-ha-509143
	I0408 18:30:35.881688  162690 host.go:66] Checking if "ha-509143" exists ...
	I0408 18:30:35.882197  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:35.882284  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:35.898804  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I0408 18:30:35.899309  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:35.899802  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:35.899834  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:35.900355  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:35.900629  162690 main.go:141] libmachine: (ha-509143) Calling .DriverName
	I0408 18:30:35.900900  162690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:30:35.900956  162690 main.go:141] libmachine: (ha-509143) Calling .GetSSHHostname
	I0408 18:30:35.904685  162690 main.go:141] libmachine: (ha-509143) DBG | domain ha-509143 has defined MAC address 52:54:00:71:fc:48 in network mk-ha-509143
	I0408 18:30:35.905571  162690 main.go:141] libmachine: (ha-509143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:fc:48", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:24:32 +0000 UTC Type:0 Mac:52:54:00:71:fc:48 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:ha-509143 Clientid:01:52:54:00:71:fc:48}
	I0408 18:30:35.905637  162690 main.go:141] libmachine: (ha-509143) DBG | domain ha-509143 has defined IP address 192.168.39.145 and MAC address 52:54:00:71:fc:48 in network mk-ha-509143
	I0408 18:30:35.905892  162690 main.go:141] libmachine: (ha-509143) Calling .GetSSHPort
	I0408 18:30:35.906181  162690 main.go:141] libmachine: (ha-509143) Calling .GetSSHKeyPath
	I0408 18:30:35.906407  162690 main.go:141] libmachine: (ha-509143) Calling .GetSSHUsername
	I0408 18:30:35.906693  162690 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/ha-509143/id_rsa Username:docker}
	I0408 18:30:35.997904  162690 ssh_runner.go:195] Run: systemctl --version
	I0408 18:30:36.004711  162690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:30:36.023814  162690 kubeconfig.go:125] found "ha-509143" server: "https://192.168.39.254:8443"
	I0408 18:30:36.023874  162690 api_server.go:166] Checking apiserver status ...
	I0408 18:30:36.023969  162690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:30:36.040017  162690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0408 18:30:36.051980  162690 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:30:36.052046  162690 ssh_runner.go:195] Run: ls
	I0408 18:30:36.056884  162690 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 18:30:36.062953  162690 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 18:30:36.062989  162690 status.go:463] ha-509143 apiserver status = Running (err=<nil>)
	I0408 18:30:36.063005  162690 status.go:176] ha-509143 status: &{Name:ha-509143 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:30:36.063024  162690 status.go:174] checking status of ha-509143-m02 ...
	I0408 18:30:36.063423  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.063480  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.079121  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0408 18:30:36.079610  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.080038  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.080059  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.080500  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.080692  162690 main.go:141] libmachine: (ha-509143-m02) Calling .GetState
	I0408 18:30:36.082798  162690 status.go:371] ha-509143-m02 host status = "Stopped" (err=<nil>)
	I0408 18:30:36.082818  162690 status.go:384] host is not running, skipping remaining checks
	I0408 18:30:36.082825  162690 status.go:176] ha-509143-m02 status: &{Name:ha-509143-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:30:36.082842  162690 status.go:174] checking status of ha-509143-m03 ...
	I0408 18:30:36.083116  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.083155  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.099663  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0408 18:30:36.100162  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.100647  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.100670  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.101093  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.101443  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetState
	I0408 18:30:36.103669  162690 status.go:371] ha-509143-m03 host status = "Running" (err=<nil>)
	I0408 18:30:36.103693  162690 host.go:66] Checking if "ha-509143-m03" exists ...
	I0408 18:30:36.104139  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.104201  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.120628  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0408 18:30:36.121208  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.121722  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.121745  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.122286  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.122528  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetIP
	I0408 18:30:36.126321  162690 main.go:141] libmachine: (ha-509143-m03) DBG | domain ha-509143-m03 has defined MAC address 52:54:00:f5:b5:16 in network mk-ha-509143
	I0408 18:30:36.126845  162690 main.go:141] libmachine: (ha-509143-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:b5:16", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:26:37 +0000 UTC Type:0 Mac:52:54:00:f5:b5:16 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-509143-m03 Clientid:01:52:54:00:f5:b5:16}
	I0408 18:30:36.126871  162690 main.go:141] libmachine: (ha-509143-m03) DBG | domain ha-509143-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:f5:b5:16 in network mk-ha-509143
	I0408 18:30:36.127054  162690 host.go:66] Checking if "ha-509143-m03" exists ...
	I0408 18:30:36.127415  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.127466  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.143869  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0408 18:30:36.144383  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.144951  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.144981  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.145440  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.145659  162690 main.go:141] libmachine: (ha-509143-m03) Calling .DriverName
	I0408 18:30:36.145904  162690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:30:36.145934  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetSSHHostname
	I0408 18:30:36.149131  162690 main.go:141] libmachine: (ha-509143-m03) DBG | domain ha-509143-m03 has defined MAC address 52:54:00:f5:b5:16 in network mk-ha-509143
	I0408 18:30:36.149746  162690 main.go:141] libmachine: (ha-509143-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:b5:16", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:26:37 +0000 UTC Type:0 Mac:52:54:00:f5:b5:16 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-509143-m03 Clientid:01:52:54:00:f5:b5:16}
	I0408 18:30:36.149779  162690 main.go:141] libmachine: (ha-509143-m03) DBG | domain ha-509143-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:f5:b5:16 in network mk-ha-509143
	I0408 18:30:36.149944  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetSSHPort
	I0408 18:30:36.150163  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetSSHKeyPath
	I0408 18:30:36.150328  162690 main.go:141] libmachine: (ha-509143-m03) Calling .GetSSHUsername
	I0408 18:30:36.150533  162690 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/ha-509143-m03/id_rsa Username:docker}
	I0408 18:30:36.229523  162690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:30:36.244671  162690 kubeconfig.go:125] found "ha-509143" server: "https://192.168.39.254:8443"
	I0408 18:30:36.244707  162690 api_server.go:166] Checking apiserver status ...
	I0408 18:30:36.244752  162690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:30:36.258889  162690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup
	W0408 18:30:36.269100  162690 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:30:36.269166  162690 ssh_runner.go:195] Run: ls
	I0408 18:30:36.273217  162690 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 18:30:36.278894  162690 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 18:30:36.278928  162690 status.go:463] ha-509143-m03 apiserver status = Running (err=<nil>)
	I0408 18:30:36.278940  162690 status.go:176] ha-509143-m03 status: &{Name:ha-509143-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:30:36.278981  162690 status.go:174] checking status of ha-509143-m04 ...
	I0408 18:30:36.279331  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.279383  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.294981  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0408 18:30:36.295452  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.295996  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.296020  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.296431  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.296660  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetState
	I0408 18:30:36.298693  162690 status.go:371] ha-509143-m04 host status = "Running" (err=<nil>)
	I0408 18:30:36.298716  162690 host.go:66] Checking if "ha-509143-m04" exists ...
	I0408 18:30:36.299026  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.299068  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.315342  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0408 18:30:36.315801  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.316283  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.316317  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.316674  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.316857  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetIP
	I0408 18:30:36.320206  162690 main.go:141] libmachine: (ha-509143-m04) DBG | domain ha-509143-m04 has defined MAC address 52:54:00:55:bc:df in network mk-ha-509143
	I0408 18:30:36.320675  162690 main.go:141] libmachine: (ha-509143-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:bc:df", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:28:08 +0000 UTC Type:0 Mac:52:54:00:55:bc:df Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-509143-m04 Clientid:01:52:54:00:55:bc:df}
	I0408 18:30:36.320700  162690 main.go:141] libmachine: (ha-509143-m04) DBG | domain ha-509143-m04 has defined IP address 192.168.39.28 and MAC address 52:54:00:55:bc:df in network mk-ha-509143
	I0408 18:30:36.320823  162690 host.go:66] Checking if "ha-509143-m04" exists ...
	I0408 18:30:36.321169  162690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:30:36.321229  162690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:30:36.337507  162690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0408 18:30:36.337975  162690 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:30:36.338533  162690 main.go:141] libmachine: Using API Version  1
	I0408 18:30:36.338561  162690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:30:36.338946  162690 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:30:36.339180  162690 main.go:141] libmachine: (ha-509143-m04) Calling .DriverName
	I0408 18:30:36.339391  162690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:30:36.339419  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetSSHHostname
	I0408 18:30:36.343148  162690 main.go:141] libmachine: (ha-509143-m04) DBG | domain ha-509143-m04 has defined MAC address 52:54:00:55:bc:df in network mk-ha-509143
	I0408 18:30:36.343597  162690 main.go:141] libmachine: (ha-509143-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:bc:df", ip: ""} in network mk-ha-509143: {Iface:virbr1 ExpiryTime:2025-04-08 19:28:08 +0000 UTC Type:0 Mac:52:54:00:55:bc:df Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-509143-m04 Clientid:01:52:54:00:55:bc:df}
	I0408 18:30:36.343623  162690 main.go:141] libmachine: (ha-509143-m04) DBG | domain ha-509143-m04 has defined IP address 192.168.39.28 and MAC address 52:54:00:55:bc:df in network mk-ha-509143
	I0408 18:30:36.343867  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetSSHPort
	I0408 18:30:36.344094  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetSSHKeyPath
	I0408 18:30:36.344267  162690 main.go:141] libmachine: (ha-509143-m04) Calling .GetSSHUsername
	I0408 18:30:36.344447  162690 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/ha-509143-m04/id_rsa Username:docker}
	I0408 18:30:36.429634  162690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:30:36.444429  162690 status.go:176] ha-509143-m04 status: &{Name:ha-509143-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.72s)
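
In the status output above, only m02 reports Stopped; the apiserver health check still returns 200 through the HA endpoint https://192.168.39.254:8443/healthz, so the remaining control planes keep the cluster reachable. The exit status 7 from minikube status is expected rather than a failure: minikube's status help describes a bit-encoded exit code (roughly 1 + 2 + 4 when host, cluster, and Kubernetes are all flagged), and a stopped node is enough to trip it. A quick way to see the code directly (the echo is an illustrative addition, not part of the test):

	out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr; echo "status exit code: $?"
	# 0 only when every node is healthy; 7 while ha-509143-m02 is stopped, matching the run above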

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 node start m02 -v=7 --alsologtostderr
E0408 18:31:14.101434  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-509143 node start m02 -v=7 --alsologtostderr: (44.532981604s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (469s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-509143 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-509143 -v=7 --alsologtostderr
E0408 18:33:30.239142  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:33:57.943496  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:35:19.901565  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-509143 -v=7 --alsologtostderr: (4m34.512449389s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-509143 --wait=true -v=7 --alsologtostderr
E0408 18:36:42.979783  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:38:30.239388  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-509143 --wait=true -v=7 --alsologtostderr: (3m14.370845151s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-509143
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (469.00s)
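
This scenario records the node list, stops the whole cluster, restarts it with --wait=true, and then lists the nodes again so the post-restart membership can be compared with the pre-stop list (hence "KeepsNodes"). The four minikube invocations, in the order they appear above:

	out/minikube-linux-amd64 node list -p ha-509143 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-509143 -v=7 --alsologtostderr            # about 4m34s in this run
	out/minikube-linux-amd64 start -p ha-509143 --wait=true -v=7 --alsologtostderr   # about 3m14s in this run
	out/minikube-linux-amd64 node list -p ha-509143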

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-509143 node delete m03 -v=7 --alsologtostderr: (18.040013292s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.91s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (273.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 stop -v=7 --alsologtostderr
E0408 18:40:19.901912  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:43:30.239105  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-509143 stop -v=7 --alsologtostderr: (4m33.080291285s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr: exit status 7 (122.600244ms)

                                                
                                                
-- stdout --
	ha-509143
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-509143-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-509143-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:44:05.345038  167112 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:44:05.345175  167112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:44:05.345185  167112 out.go:358] Setting ErrFile to fd 2...
	I0408 18:44:05.345189  167112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:44:05.345436  167112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:44:05.345644  167112 out.go:352] Setting JSON to false
	I0408 18:44:05.345692  167112 mustload.go:65] Loading cluster: ha-509143
	I0408 18:44:05.345818  167112 notify.go:220] Checking for updates...
	I0408 18:44:05.346338  167112 config.go:182] Loaded profile config "ha-509143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:44:05.346376  167112 status.go:174] checking status of ha-509143 ...
	I0408 18:44:05.346935  167112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:44:05.347006  167112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:44:05.367083  167112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0408 18:44:05.367617  167112 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:44:05.368321  167112 main.go:141] libmachine: Using API Version  1
	I0408 18:44:05.368362  167112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:44:05.368745  167112 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:44:05.368937  167112 main.go:141] libmachine: (ha-509143) Calling .GetState
	I0408 18:44:05.371060  167112 status.go:371] ha-509143 host status = "Stopped" (err=<nil>)
	I0408 18:44:05.371080  167112 status.go:384] host is not running, skipping remaining checks
	I0408 18:44:05.371087  167112 status.go:176] ha-509143 status: &{Name:ha-509143 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:44:05.371122  167112 status.go:174] checking status of ha-509143-m02 ...
	I0408 18:44:05.371469  167112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:44:05.371518  167112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:44:05.388239  167112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
	I0408 18:44:05.388796  167112 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:44:05.389354  167112 main.go:141] libmachine: Using API Version  1
	I0408 18:44:05.389381  167112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:44:05.389740  167112 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:44:05.390015  167112 main.go:141] libmachine: (ha-509143-m02) Calling .GetState
	I0408 18:44:05.391930  167112 status.go:371] ha-509143-m02 host status = "Stopped" (err=<nil>)
	I0408 18:44:05.391951  167112 status.go:384] host is not running, skipping remaining checks
	I0408 18:44:05.391959  167112 status.go:176] ha-509143-m02 status: &{Name:ha-509143-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:44:05.391983  167112 status.go:174] checking status of ha-509143-m04 ...
	I0408 18:44:05.392308  167112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:44:05.392360  167112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:44:05.409574  167112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0408 18:44:05.410217  167112 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:44:05.410735  167112 main.go:141] libmachine: Using API Version  1
	I0408 18:44:05.410767  167112 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:44:05.411135  167112 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:44:05.411380  167112 main.go:141] libmachine: (ha-509143-m04) Calling .GetState
	I0408 18:44:05.413642  167112 status.go:371] ha-509143-m04 host status = "Stopped" (err=<nil>)
	I0408 18:44:05.413669  167112 status.go:384] host is not running, skipping remaining checks
	I0408 18:44:05.413677  167112 status.go:176] ha-509143-m04 status: &{Name:ha-509143-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.20s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (126.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-509143 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 18:44:53.305714  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:45:19.902599  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-509143 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.345994523s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-509143 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-509143 --control-plane -v=7 --alsologtostderr: (1m16.04459211s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-509143 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (84.38s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-154196 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0408 18:48:30.239301  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-154196 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.379986805s)
--- PASS: TestJSONOutput/start/Command (84.38s)
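
With --output=json, each progress line minikube prints is a structured CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data) instead of plain text; the TestErrorJSONOutput stdout further below shows the exact shape. A sketch for pulling out just the human-readable messages, where the jq filter is an assumption of this example and not something the test itself runs:

	out/minikube-linux-amd64 start -p json-output-154196 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio | jq -r '.data.message'
	# one message per emitted event; the underlying data.message fields are visible verbatim in the TestErrorJSONOutput output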

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-154196 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-154196 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-154196 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-154196 --output=json --user=testUser: (7.356792189s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-316893 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-316893 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.49796ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0af6523d-84de-4d84-9230-65231e26f70d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-316893] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a48caf6-a8bf-47a8-8528-0c3460133c5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20604"}}
	{"specversion":"1.0","id":"7d3c2b5f-c094-45f6-83f8-e05667d95c2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d21993fd-b40e-4393-b20a-152090018ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig"}}
	{"specversion":"1.0","id":"37833358-3de3-45e1-a6cc-436345504f2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube"}}
	{"specversion":"1.0","id":"1e409686-8db8-4402-a99b-a601d838d8e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ebb8d93-8dcc-46ad-beef-1992040828eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ab95e26d-6f7d-4eeb-ad64-bfe23939bf32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-316893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-316893
--- PASS: TestErrorJSONOutput (0.22s)
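The TestErrorJSONOutput run above exercises minikube's --output=json mode, which emits one CloudEvents-style JSON object per line (the specversion, id, source, type, datacontenttype, and data fields visible in the captured stdout). The sketch below is a hypothetical consumer of that stream, not part of the test suite or the minikube repository; it assumes only the line-delimited format shown above.

	// Hypothetical consumer of minikube's line-delimited --output=json events.
	// Field names are taken from the captured stdout above; this is a sketch,
	// not code from the minikube repository.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | <this program>
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout captured above, such a consumer would surface the DRV_UNSUPPORTED_OS event with exit code 56 that the test provokes by passing --driver=fail.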

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-314949 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-314949 --driver=kvm2  --container-runtime=crio: (43.742695714s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-330324 --driver=kvm2  --container-runtime=crio
E0408 18:50:19.904973  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-330324 --driver=kvm2  --container-runtime=crio: (41.941412149s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-314949
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-330324
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-330324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-330324
helpers_test.go:175: Cleaning up "first-314949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-314949
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-314949: (1.060974563s)
--- PASS: TestMinikubeProfile (88.74s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-774360 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-774360 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.600358973s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.60s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-774360 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-774360 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-795205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-795205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.341924553s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.34s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.95s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-774360 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-795205
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-795205: (1.294249692s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-795205
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-795205: (22.142369275s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-795205 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-481713 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 18:53:22.983529  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:53:30.238408  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-481713 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.158807367s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-481713 -- rollout status deployment/busybox: (5.021533616s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-6kcqj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-v5g5j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-6kcqj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-v5g5j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-6kcqj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-v5g5j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-6kcqj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-6kcqj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-v5g5j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-481713 -- exec busybox-58667487b6-v5g5j -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
TestMultiNode/serial/AddNode (47.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-481713 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-481713 -v 3 --alsologtostderr: (47.341251744s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-481713 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp testdata/cp-test.txt multinode-481713:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704530672/001/cp-test_multinode-481713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713:/home/docker/cp-test.txt multinode-481713-m02:/home/docker/cp-test_multinode-481713_multinode-481713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test_multinode-481713_multinode-481713-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713:/home/docker/cp-test.txt multinode-481713-m03:/home/docker/cp-test_multinode-481713_multinode-481713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test_multinode-481713_multinode-481713-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp testdata/cp-test.txt multinode-481713-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704530672/001/cp-test_multinode-481713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m02:/home/docker/cp-test.txt multinode-481713:/home/docker/cp-test_multinode-481713-m02_multinode-481713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test_multinode-481713-m02_multinode-481713.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m02:/home/docker/cp-test.txt multinode-481713-m03:/home/docker/cp-test_multinode-481713-m02_multinode-481713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test_multinode-481713-m02_multinode-481713-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp testdata/cp-test.txt multinode-481713-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile704530672/001/cp-test_multinode-481713-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m03:/home/docker/cp-test.txt multinode-481713:/home/docker/cp-test_multinode-481713-m03_multinode-481713.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713 "sudo cat /home/docker/cp-test_multinode-481713-m03_multinode-481713.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 cp multinode-481713-m03:/home/docker/cp-test.txt multinode-481713-m02:/home/docker/cp-test_multinode-481713-m03_multinode-481713-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 ssh -n multinode-481713-m02 "sudo cat /home/docker/cp-test_multinode-481713-m03_multinode-481713-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.68s)

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-481713 node stop m03: (1.418325466s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-481713 status: exit status 7 (452.142996ms)

                                                
                                                
-- stdout --
	multinode-481713
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-481713-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-481713-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr: exit status 7 (450.82178ms)

                                                
                                                
-- stdout --
	multinode-481713
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-481713-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-481713-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 18:54:56.484699  175020 out.go:345] Setting OutFile to fd 1 ...
	I0408 18:54:56.484836  175020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:54:56.484846  175020 out.go:358] Setting ErrFile to fd 2...
	I0408 18:54:56.484852  175020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 18:54:56.485077  175020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 18:54:56.485294  175020 out.go:352] Setting JSON to false
	I0408 18:54:56.485335  175020 mustload.go:65] Loading cluster: multinode-481713
	I0408 18:54:56.485440  175020 notify.go:220] Checking for updates...
	I0408 18:54:56.485808  175020 config.go:182] Loaded profile config "multinode-481713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 18:54:56.485864  175020 status.go:174] checking status of multinode-481713 ...
	I0408 18:54:56.486370  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.486510  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.505439  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I0408 18:54:56.505970  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.506560  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.506588  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.507042  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.507339  175020 main.go:141] libmachine: (multinode-481713) Calling .GetState
	I0408 18:54:56.509034  175020 status.go:371] multinode-481713 host status = "Running" (err=<nil>)
	I0408 18:54:56.509061  175020 host.go:66] Checking if "multinode-481713" exists ...
	I0408 18:54:56.509480  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.509528  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.525933  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0408 18:54:56.526426  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.526962  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.526998  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.527383  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.527630  175020 main.go:141] libmachine: (multinode-481713) Calling .GetIP
	I0408 18:54:56.530947  175020 main.go:141] libmachine: (multinode-481713) DBG | domain multinode-481713 has defined MAC address 52:54:00:9b:f4:ca in network mk-multinode-481713
	I0408 18:54:56.531389  175020 main.go:141] libmachine: (multinode-481713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:ca", ip: ""} in network mk-multinode-481713: {Iface:virbr1 ExpiryTime:2025-04-08 19:52:11 +0000 UTC Type:0 Mac:52:54:00:9b:f4:ca Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-481713 Clientid:01:52:54:00:9b:f4:ca}
	I0408 18:54:56.531430  175020 main.go:141] libmachine: (multinode-481713) DBG | domain multinode-481713 has defined IP address 192.168.39.21 and MAC address 52:54:00:9b:f4:ca in network mk-multinode-481713
	I0408 18:54:56.531551  175020 host.go:66] Checking if "multinode-481713" exists ...
	I0408 18:54:56.531885  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.531951  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.548039  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34325
	I0408 18:54:56.548921  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.549646  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.549691  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.550352  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.550888  175020 main.go:141] libmachine: (multinode-481713) Calling .DriverName
	I0408 18:54:56.551123  175020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:54:56.551147  175020 main.go:141] libmachine: (multinode-481713) Calling .GetSSHHostname
	I0408 18:54:56.554638  175020 main.go:141] libmachine: (multinode-481713) DBG | domain multinode-481713 has defined MAC address 52:54:00:9b:f4:ca in network mk-multinode-481713
	I0408 18:54:56.555218  175020 main.go:141] libmachine: (multinode-481713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f4:ca", ip: ""} in network mk-multinode-481713: {Iface:virbr1 ExpiryTime:2025-04-08 19:52:11 +0000 UTC Type:0 Mac:52:54:00:9b:f4:ca Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:multinode-481713 Clientid:01:52:54:00:9b:f4:ca}
	I0408 18:54:56.555259  175020 main.go:141] libmachine: (multinode-481713) DBG | domain multinode-481713 has defined IP address 192.168.39.21 and MAC address 52:54:00:9b:f4:ca in network mk-multinode-481713
	I0408 18:54:56.555484  175020 main.go:141] libmachine: (multinode-481713) Calling .GetSSHPort
	I0408 18:54:56.555825  175020 main.go:141] libmachine: (multinode-481713) Calling .GetSSHKeyPath
	I0408 18:54:56.556005  175020 main.go:141] libmachine: (multinode-481713) Calling .GetSSHUsername
	I0408 18:54:56.556193  175020 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/multinode-481713/id_rsa Username:docker}
	I0408 18:54:56.642021  175020 ssh_runner.go:195] Run: systemctl --version
	I0408 18:54:56.648580  175020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:54:56.664790  175020 kubeconfig.go:125] found "multinode-481713" server: "https://192.168.39.21:8443"
	I0408 18:54:56.664838  175020 api_server.go:166] Checking apiserver status ...
	I0408 18:54:56.664885  175020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 18:54:56.679703  175020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1076/cgroup
	W0408 18:54:56.690275  175020 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1076/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 18:54:56.690346  175020 ssh_runner.go:195] Run: ls
	I0408 18:54:56.695278  175020 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0408 18:54:56.699679  175020 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0408 18:54:56.699712  175020 status.go:463] multinode-481713 apiserver status = Running (err=<nil>)
	I0408 18:54:56.699722  175020 status.go:176] multinode-481713 status: &{Name:multinode-481713 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:54:56.699736  175020 status.go:174] checking status of multinode-481713-m02 ...
	I0408 18:54:56.700127  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.700177  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.716711  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0408 18:54:56.717280  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.717719  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.717742  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.718147  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.718437  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetState
	I0408 18:54:56.720566  175020 status.go:371] multinode-481713-m02 host status = "Running" (err=<nil>)
	I0408 18:54:56.720589  175020 host.go:66] Checking if "multinode-481713-m02" exists ...
	I0408 18:54:56.720965  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.721014  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.737933  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
	I0408 18:54:56.738364  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.738805  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.738827  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.739226  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.739436  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetIP
	I0408 18:54:56.743008  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | domain multinode-481713-m02 has defined MAC address 52:54:00:27:de:f7 in network mk-multinode-481713
	I0408 18:54:56.743565  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:de:f7", ip: ""} in network mk-multinode-481713: {Iface:virbr1 ExpiryTime:2025-04-08 19:53:14 +0000 UTC Type:0 Mac:52:54:00:27:de:f7 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-481713-m02 Clientid:01:52:54:00:27:de:f7}
	I0408 18:54:56.743592  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | domain multinode-481713-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:27:de:f7 in network mk-multinode-481713
	I0408 18:54:56.743960  175020 host.go:66] Checking if "multinode-481713-m02" exists ...
	I0408 18:54:56.744280  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.744333  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.762067  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0408 18:54:56.762687  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.763279  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.763309  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.763755  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.763974  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .DriverName
	I0408 18:54:56.764251  175020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 18:54:56.764281  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetSSHHostname
	I0408 18:54:56.768521  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | domain multinode-481713-m02 has defined MAC address 52:54:00:27:de:f7 in network mk-multinode-481713
	I0408 18:54:56.769149  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:de:f7", ip: ""} in network mk-multinode-481713: {Iface:virbr1 ExpiryTime:2025-04-08 19:53:14 +0000 UTC Type:0 Mac:52:54:00:27:de:f7 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:multinode-481713-m02 Clientid:01:52:54:00:27:de:f7}
	I0408 18:54:56.769192  175020 main.go:141] libmachine: (multinode-481713-m02) DBG | domain multinode-481713-m02 has defined IP address 192.168.39.92 and MAC address 52:54:00:27:de:f7 in network mk-multinode-481713
	I0408 18:54:56.769361  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetSSHPort
	I0408 18:54:56.769554  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetSSHKeyPath
	I0408 18:54:56.769798  175020 main.go:141] libmachine: (multinode-481713-m02) Calling .GetSSHUsername
	I0408 18:54:56.769977  175020 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-141129/.minikube/machines/multinode-481713-m02/id_rsa Username:docker}
	I0408 18:54:56.849012  175020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 18:54:56.863448  175020 status.go:176] multinode-481713-m02 status: &{Name:multinode-481713-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0408 18:54:56.863491  175020 status.go:174] checking status of multinode-481713-m03 ...
	I0408 18:54:56.863946  175020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 18:54:56.864032  175020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 18:54:56.880104  175020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36065
	I0408 18:54:56.880584  175020 main.go:141] libmachine: () Calling .GetVersion
	I0408 18:54:56.881075  175020 main.go:141] libmachine: Using API Version  1
	I0408 18:54:56.881096  175020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 18:54:56.881458  175020 main.go:141] libmachine: () Calling .GetMachineName
	I0408 18:54:56.881649  175020 main.go:141] libmachine: (multinode-481713-m03) Calling .GetState
	I0408 18:54:56.883309  175020 status.go:371] multinode-481713-m03 host status = "Stopped" (err=<nil>)
	I0408 18:54:56.883325  175020 status.go:384] host is not running, skipping remaining checks
	I0408 18:54:56.883331  175020 status.go:176] multinode-481713-m03 status: &{Name:multinode-481713-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
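The verbose status output above shows the per-node checks behind `minikube status`: querying the libvirt domain state, SSHing in to check the kubelet service, and probing the apiserver's /healthz endpoint (here returning "200: ok" at https://192.168.39.21:8443/healthz). A standalone, hypothetical probe of that endpoint might look like the sketch below; it is not minikube's implementation, and it skips certificate verification for brevity.

	// Hypothetical healthz probe against the address seen in the log above.
	// Not minikube's code; TLS verification is disabled only to keep the sketch short.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.21:8443/healthz") // IP taken from the log above
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}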

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 node start m03 -v=7 --alsologtostderr
E0408 18:55:19.902119  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-481713 node start m03 -v=7 --alsologtostderr: (39.710438448s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.39s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (344.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-481713
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-481713
E0408 18:58:30.238487  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-481713: (3m3.422649056s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-481713 --wait=true -v=8 --alsologtostderr
E0408 19:00:19.902104  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-481713 --wait=true -v=8 --alsologtostderr: (2m41.394994322s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-481713
--- PASS: TestMultiNode/serial/RestartKeepsNodes (344.93s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-481713 node delete m03: (2.092654681s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 stop
E0408 19:01:33.309316  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:03:30.238800  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-481713 stop: (3m1.797883788s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-481713 status: exit status 7 (100.208634ms)

                                                
                                                
-- stdout --
	multinode-481713
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-481713-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr: exit status 7 (98.730777ms)

                                                
                                                
-- stdout --
	multinode-481713
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-481713-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:04:26.840998  178083 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:04:26.841142  178083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:04:26.841148  178083 out.go:358] Setting ErrFile to fd 2...
	I0408 19:04:26.841152  178083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:04:26.841383  178083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:04:26.841593  178083 out.go:352] Setting JSON to false
	I0408 19:04:26.841631  178083 mustload.go:65] Loading cluster: multinode-481713
	I0408 19:04:26.842328  178083 notify.go:220] Checking for updates...
	I0408 19:04:26.842880  178083 config.go:182] Loaded profile config "multinode-481713": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:04:26.842931  178083 status.go:174] checking status of multinode-481713 ...
	I0408 19:04:26.844136  178083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:04:26.844214  178083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:04:26.861199  178083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I0408 19:04:26.861747  178083 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:04:26.862376  178083 main.go:141] libmachine: Using API Version  1
	I0408 19:04:26.862406  178083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:04:26.863098  178083 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:04:26.863408  178083 main.go:141] libmachine: (multinode-481713) Calling .GetState
	I0408 19:04:26.865584  178083 status.go:371] multinode-481713 host status = "Stopped" (err=<nil>)
	I0408 19:04:26.865611  178083 status.go:384] host is not running, skipping remaining checks
	I0408 19:04:26.865617  178083 status.go:176] multinode-481713 status: &{Name:multinode-481713 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 19:04:26.865661  178083 status.go:174] checking status of multinode-481713-m02 ...
	I0408 19:04:26.866119  178083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 19:04:26.866205  178083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 19:04:26.883383  178083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0408 19:04:26.883993  178083 main.go:141] libmachine: () Calling .GetVersion
	I0408 19:04:26.884773  178083 main.go:141] libmachine: Using API Version  1
	I0408 19:04:26.884811  178083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 19:04:26.885340  178083 main.go:141] libmachine: () Calling .GetMachineName
	I0408 19:04:26.885610  178083 main.go:141] libmachine: (multinode-481713-m02) Calling .GetState
	I0408 19:04:26.887808  178083 status.go:371] multinode-481713-m02 host status = "Stopped" (err=<nil>)
	I0408 19:04:26.887833  178083 status.go:384] host is not running, skipping remaining checks
	I0408 19:04:26.887840  178083 status.go:176] multinode-481713-m02 status: &{Name:multinode-481713-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.00s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (117.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-481713 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 19:05:19.901620  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-481713 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.062328576s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-481713 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.72s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-481713
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-481713-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-481713-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.684394ms)

                                                
                                                
-- stdout --
	* [multinode-481713-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-481713-m02' is duplicated with machine name 'multinode-481713-m02' in profile 'multinode-481713'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-481713-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-481713-m03 --driver=kvm2  --container-runtime=crio: (45.800461207s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-481713
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-481713: exit status 80 (229.630825ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-481713 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-481713-m03 already exists in multinode-481713-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-481713-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-481713-m03: (1.085665751s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.26s)

                                                
                                    
TestScheduledStopUnix (120.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-729577 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-729577 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.120605536s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-729577 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-729577 -n scheduled-stop-729577
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-729577 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0408 19:11:33.354819  148487 retry.go:31] will retry after 66.785µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.356026  148487 retry.go:31] will retry after 172.733µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.357192  148487 retry.go:31] will retry after 279.775µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.358361  148487 retry.go:31] will retry after 395.964µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.359519  148487 retry.go:31] will retry after 312.937µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.360724  148487 retry.go:31] will retry after 710.277µs: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.361879  148487 retry.go:31] will retry after 1.627961ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.364112  148487 retry.go:31] will retry after 2.094006ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.367356  148487 retry.go:31] will retry after 1.402436ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.369597  148487 retry.go:31] will retry after 5.387213ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.375946  148487 retry.go:31] will retry after 4.137988ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.381245  148487 retry.go:31] will retry after 5.926001ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.387493  148487 retry.go:31] will retry after 18.315654ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.406764  148487 retry.go:31] will retry after 23.897707ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
I0408 19:11:33.431035  148487 retry.go:31] will retry after 24.813455ms: open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/scheduled-stop-729577/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-729577 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-729577 -n scheduled-stop-729577
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-729577
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-729577 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-729577
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-729577: exit status 7 (81.561497ms)

                                                
                                                
-- stdout --
	scheduled-stop-729577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-729577 -n scheduled-stop-729577
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-729577 -n scheduled-stop-729577: exit status 7 (79.880575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-729577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-729577
--- PASS: TestScheduledStopUnix (120.98s)

                                                
                                    
TestRunningBinaryUpgrade (146.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.378283129 start -p running-upgrade-378868 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.378283129 start -p running-upgrade-378868 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.481267288s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-378868 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-378868 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.870145368s)
helpers_test.go:175: Cleaning up "running-upgrade-378868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-378868
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-378868: (1.961451455s)
--- PASS: TestRunningBinaryUpgrade (146.85s)

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-880875 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-880875 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (131.664284ms)

                                                
                                                
-- stdout --
	* [false-880875] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 19:12:48.630487  182650 out.go:345] Setting OutFile to fd 1 ...
	I0408 19:12:48.631043  182650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:48.631064  182650 out.go:358] Setting ErrFile to fd 2...
	I0408 19:12:48.631071  182650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 19:12:48.631418  182650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-141129/.minikube/bin
	I0408 19:12:48.632295  182650 out.go:352] Setting JSON to false
	I0408 19:12:48.633457  182650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10514,"bootTime":1744129055,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 19:12:48.633601  182650 start.go:139] virtualization: kvm guest
	I0408 19:12:48.636142  182650 out.go:177] * [false-880875] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 19:12:48.638127  182650 out.go:177]   - MINIKUBE_LOCATION=20604
	I0408 19:12:48.638137  182650 notify.go:220] Checking for updates...
	I0408 19:12:48.641792  182650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 19:12:48.643541  182650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	I0408 19:12:48.645604  182650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	I0408 19:12:48.647365  182650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 19:12:48.649309  182650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 19:12:48.651720  182650 config.go:182] Loaded profile config "force-systemd-flag-042482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:12:48.651857  182650 config.go:182] Loaded profile config "kubernetes-upgrade-958400": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 19:12:48.651968  182650 config.go:182] Loaded profile config "offline-crio-913064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0408 19:12:48.652070  182650 driver.go:394] Setting default libvirt URI to qemu:///system
	I0408 19:12:48.697014  182650 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 19:12:48.698456  182650 start.go:297] selected driver: kvm2
	I0408 19:12:48.698478  182650 start.go:901] validating driver "kvm2" against <nil>
	I0408 19:12:48.698491  182650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 19:12:48.701202  182650 out.go:201] 
	W0408 19:12:48.703115  182650 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0408 19:12:48.704681  182650 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-880875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-880875

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880875"

                                                
                                                
----------------------- debugLogs end: false-880875 [took: 3.305806235s] --------------------------------
helpers_test.go:175: Cleaning up "false-880875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-880875
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (186.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1985664859 start -p stopped-upgrade-179867 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0408 19:13:30.238869  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1985664859 start -p stopped-upgrade-179867 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.776813511s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1985664859 -p stopped-upgrade-179867 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1985664859 -p stopped-upgrade-179867 stop: (2.15888569s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-179867 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0408 19:15:19.902415  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-179867 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.111676012s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (186.05s)

                                                
                                    
TestPause/serial/Start (79.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-446442 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-446442 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m19.439229845s)
--- PASS: TestPause/serial/Start (79.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.56s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-446442 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-446442 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.528764162s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.56s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-179867
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.435664ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-006114] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20604
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20604-141129/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-141129/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006114 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006114 --driver=kvm2  --container-runtime=crio: (45.413481454s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-006114 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.71s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-446442 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-446442 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-446442 --output=json --layout=cluster: exit status 2 (326.254269ms)

                                                
                                                
-- stdout --
	{"Name":"pause-446442","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-446442","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-446442 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-446442 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-446442 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-446442 --alsologtostderr -v=5: (1.118693269s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.66s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.661573776s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (48.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.026057138s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-006114 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-006114 status -o json: exit status 2 (261.975821ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-006114","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-006114
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.15s)

                                                
                                    
TestNoKubernetes/serial/Start (29.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006114 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.882872552s)
--- PASS: TestNoKubernetes/serial/Start (29.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-006114 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-006114 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.760485ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-006114
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-006114: (1.304405197s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (62.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-006114 --driver=kvm2  --container-runtime=crio
E0408 19:18:13.311534  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:18:30.238900  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-006114 --driver=kvm2  --container-runtime=crio: (1m2.887280161s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.89s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (106.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m46.941278488s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-006114 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-006114 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.456016ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (98.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m38.706035904s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.71s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (88.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0408 19:20:19.902227  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m28.150575226s)
--- PASS: TestNetworkPlugins/group/calico/Start (88.15s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-880875 "pgrep -a kubelet"
I0408 19:20:39.381629  148487 config.go:182] Loaded profile config "auto-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qblcq" [9853daf2-8962-4b3f-96f8-9ccfb112535e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-qblcq" [9853daf2-8962-4b3f-96f8-9ccfb112535e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.003991822s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mkr5q" [248d1ff4-09dd-409c-b160-7917b685a014] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005257301s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-880875 "pgrep -a kubelet"
I0408 19:21:00.374602  148487 config.go:182] Loaded profile config "kindnet-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h4w6j" [66ecf98e-1747-47f1-8a52-a72c9329ed70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h4w6j" [66ecf98e-1747-47f1-8a52-a72c9329ed70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004286836s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (72.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.918223589s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.92s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m16.76741824s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.77s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (112.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m52.331403809s)
--- PASS: TestNetworkPlugins/group/flannel/Start (112.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zrqvb" [81c31c52-9f55-404d-ab95-b7821b56b019] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00416005s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-880875 "pgrep -a kubelet"
I0408 19:21:40.235424  148487 config.go:182] Loaded profile config "calico-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zlq77" [779f45a4-68e9-472a-a094-01045011c571] Pending
helpers_test.go:344: "netcat-5d86dc444-zlq77" [779f45a4-68e9-472a-a094-01045011c571] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zlq77" [779f45a4-68e9-472a-a094-01045011c571] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.003957555s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (99.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-880875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m39.896152812s)
--- PASS: TestNetworkPlugins/group/bridge/Start (99.90s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-880875 "pgrep -a kubelet"
I0408 19:22:25.677555  148487 config.go:182] Loaded profile config "custom-flannel-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ls5gd" [397281a3-a421-4cae-b7b3-579cf0666eb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ls5gd" [397281a3-a421-4cae-b7b3-579cf0666eb0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004677116s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-880875 "pgrep -a kubelet"
I0408 19:22:43.427069  148487 config.go:182] Loaded profile config "enable-default-cni-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z4gp8" [eea4e891-ca79-45cf-89b6-d79b3e406546] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z4gp8" [eea4e891-ca79-45cf-89b6-d79b3e406546] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003979287s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (84.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-552268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-552268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m24.227179452s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tvbkr" [faf5864a-8a58-4d69-8715-ba2f625b4d3e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003972327s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-880875 "pgrep -a kubelet"
I0408 19:23:28.580582  148487 config.go:182] Loaded profile config "flannel-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4g5vw" [e85f927f-8136-4cc2-af35-722f8c4141ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0408 19:23:30.238381  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-4g5vw" [e85f927f-8136-4cc2-af35-722f8c4141ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004148368s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-880875 "pgrep -a kubelet"
I0408 19:23:56.473263  148487 config.go:182] Loaded profile config "bridge-880875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-880875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pp4ws" [b8399a7d-ecf9-4ab8-9e4e-cdabfb91fe78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pp4ws" [b8399a7d-ecf9-4ab8-9e4e-cdabfb91fe78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004321139s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (96.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-787708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-787708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m36.122164006s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-880875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-880875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0408 19:33:11.435174  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m33.784714296s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-552268 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0c15c2b1-4f06-42b4-aff2-4b5ed06457b9] Pending
helpers_test.go:344: "busybox" [0c15c2b1-4f06-42b4-aff2-4b5ed06457b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0c15c2b1-4f06-42b4-aff2-4b5ed06457b9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.003915709s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-552268 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-552268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-552268 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-552268 --alsologtostderr -v=3
E0408 19:25:19.901700  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-552268 --alsologtostderr -v=3: (1m31.066582314s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-787708 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94fd3d3b-a533-443c-9f3e-e4d737310295] Pending
E0408 19:25:39.644481  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.650958  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.662462  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.684036  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.725544  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.807151  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:39.969091  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [94fd3d3b-a533-443c-9f3e-e4d737310295] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0408 19:25:40.291444  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:40.933860  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:42.215944  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:44.778050  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [94fd3d3b-a533-443c-9f3e-e4d737310295] Running
E0408 19:25:49.899641  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.003317564s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-787708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-787708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-787708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-787708 --alsologtostderr -v=3
E0408 19:25:54.135926  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.142501  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.153987  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.175470  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.217074  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.298615  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.460380  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:54.782386  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:55.424467  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:25:56.705954  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-787708 --alsologtostderr -v=3: (1m30.840138783s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-171742 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abc8c88f-f0bb-458f-86ab-11fdf26cc6d7] Pending
helpers_test.go:344: "busybox" [abc8c88f-f0bb-458f-86ab-11fdf26cc6d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0408 19:25:59.267429  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:00.141720  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [abc8c88f-f0bb-458f-86ab-11fdf26cc6d7] Running
E0408 19:26:04.389416  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00396353s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-171742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-171742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-171742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-171742 --alsologtostderr -v=3
E0408 19:26:14.631328  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:20.623339  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-171742 --alsologtostderr -v=3: (1m31.077301058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552268 -n no-preload-552268
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552268 -n no-preload-552268: exit status 7 (75.511412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-552268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (353.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-552268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0408 19:26:33.999201  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.005711  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.017236  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.038875  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.080373  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.162184  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.324423  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:34.646312  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:35.113354  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:35.288199  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:36.569553  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:39.131197  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:42.986474  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/addons-835623/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:44.253437  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:26:54.494803  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:01.585786  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:14.976216  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:16.074835  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-552268 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m52.765404966s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-552268 -n no-preload-552268
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (353.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-787708 -n embed-certs-787708
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-787708 -n embed-certs-787708: exit status 7 (77.826742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-787708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (337.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-787708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0408 19:27:25.912358  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:25.918860  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:25.930360  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:25.952383  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:25.993875  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:26.075453  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:26.237044  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:26.558999  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:27.200673  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:28.482055  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-787708 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m36.928507567s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-787708 -n embed-certs-787708
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742: exit status 7 (73.623332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-171742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-171742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0408 19:27:43.730822  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:43.737295  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:43.748825  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:43.770533  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:43.812086  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:43.893541  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:44.056100  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:44.378247  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:45.019641  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:46.301728  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:46.407952  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:48.863215  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:53.985603  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:27:55.938372  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/calico-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:04.227505  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:06.889743  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.310755  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.317438  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.329279  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.350658  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.392099  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.474041  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.636281  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:22.958512  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:23.507595  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/auto-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:23.600104  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:24.708942  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:24.881671  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:27.442960  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:30.239121  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:32.565012  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:37.996407  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/kindnet-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:42.807170  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:47.851858  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-171742 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m11.357459153s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-257500 --alsologtostderr -v=3
E0408 19:28:56.746067  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:56.752565  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:56.764079  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:56.785572  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:56.827073  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:28:56.908576  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-257500 --alsologtostderr -v=3: (1.386336169s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500
E0408 19:28:57.070632  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-257500 -n old-k8s-version-257500: exit status 7 (77.895169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-257500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-x2qb4" [e6e40922-a070-4e41-a728-97a80e9bbe03] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005073492s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-x2qb4" [e6e40922-a070-4e41-a728-97a80e9bbe03] Running
E0408 19:32:25.911537  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004258087s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-552268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-552268 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-552268 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552268 -n no-preload-552268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552268 -n no-preload-552268: exit status 2 (262.153143ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552268 -n no-preload-552268
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552268 -n no-preload-552268: exit status 2 (269.149973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-552268 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-552268 -n no-preload-552268
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-552268 -n no-preload-552268
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (53.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-574058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0408 19:32:43.730833  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/enable-default-cni-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-574058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (53.806678289s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4c7f5" [b8352ff3-7250-4ca5-8396-c2033dba8c5c] Running
E0408 19:32:53.615423  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/custom-flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004119754s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4c7f5" [b8352ff3-7250-4ca5-8396-c2033dba8c5c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005445339s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-171742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-28pft" [8f155861-2470-4ea5-be02-d635cf657d8e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-28pft" [8f155861-2470-4ea5-be02-d635cf657d8e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005616672s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-171742 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-171742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-171742 --alsologtostderr -v=1: (1.145073888s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742: exit status 2 (308.730718ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742: exit status 2 (304.720626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-171742 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-171742 -n default-k8s-diff-port-171742
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-28pft" [8f155861-2470-4ea5-be02-d635cf657d8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004674016s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-787708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-787708 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-787708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-787708 -n embed-certs-787708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-787708 -n embed-certs-787708: exit status 2 (269.751215ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-787708 -n embed-certs-787708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-787708 -n embed-certs-787708: exit status 2 (270.804897ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-787708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-787708 -n embed-certs-787708
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-787708 -n embed-certs-787708
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-574058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-574058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108193169s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-574058 --alsologtostderr -v=3
E0408 19:33:30.238422  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/functional-391629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-574058 --alsologtostderr -v=3: (7.347976475s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574058 -n newest-cni-574058
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574058 -n newest-cni-574058: exit status 7 (74.9627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-574058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-574058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0408 19:33:50.014394  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/flannel-880875/client.crt: no such file or directory" logger="UnhandledError"
E0408 19:33:56.745917  148487 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-141129/.minikube/profiles/bridge-880875/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-574058 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (36.814580968s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574058 -n newest-cni-574058
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-574058 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-574058 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574058 -n newest-cni-574058
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574058 -n newest-cni-574058: exit status 2 (262.784063ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574058 -n newest-cni-574058
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574058 -n newest-cni-574058: exit status 2 (273.968317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-574058 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574058 -n newest-cni-574058
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574058 -n newest-cni-574058
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

                                                
                                    

Test skip (35/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-835623 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-880875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-880875

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880875"

                                                
                                                
----------------------- debugLogs end: kubenet-880875 [took: 3.589008842s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-880875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-880875
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-880875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-880875" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-880875

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-880875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880875"

                                                
                                                
----------------------- debugLogs end: cilium-880875 [took: 4.086487067s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-880875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-880875
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-877689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-877689
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    