Test Report: KVM_Linux_crio 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Test fail (10/321)

TestAddons/parallel/Ingress (154.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-735249 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-735249 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-735249 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dff3b731-4ff4-4616-93c3-ecb41f13454c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dff3b731-4ff4-4616-93c3-ecb41f13454c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.239402953s
I0407 12:58:39.861445  249516 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-735249 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.431434184s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-735249 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.136
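For triage, the two checks that matter above can be replayed by hand against a comparable profile. This is only a sketch: the profile name addons-735249 and the addon set come from this run's log and will differ in a local reproduction.

# 1. Confirm the ingress-nginx controller is Ready, as the test did:
kubectl --context addons-735249 wait --for=condition=ready \
  --namespace=ingress-nginx pod \
  --selector=app.kubernetes.io/component=controller --timeout=90s

# 2. Re-issue the request that failed; the remote process exiting with
#    status 28 matches curl's "operation timed out" exit code, i.e. the
#    ingress never answered on 127.0.0.1 inside the VM.
out/minikube-linux-amd64 -p addons-735249 ssh \
  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"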
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-735249 -n addons-735249
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 logs -n 25: (1.307689013s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-084066                                                                     | download-only-084066 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| delete  | -p download-only-378763                                                                     | download-only-378763 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| delete  | -p download-only-084066                                                                     | download-only-084066 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-206431 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | binary-mirror-206431                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37115                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-206431                                                                     | binary-mirror-206431 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| addons  | enable dashboard -p                                                                         | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | addons-735249                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | addons-735249                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-735249 --wait=true                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:57 UTC | 07 Apr 25 12:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:57 UTC | 07 Apr 25 12:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | -p addons-735249                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-735249 ip                                                                            | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-735249 ssh cat                                                                       | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | /opt/local-path-provisioner/pvc-907ff389-84bc-49da-96de-e62e4981b23c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-735249 addons disable                                                                | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-735249 ssh curl -s                                                                   | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:58 UTC | 07 Apr 25 12:58 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-735249 addons                                                                        | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-735249 ip                                                                            | addons-735249        | jenkins | v1.35.0 | 07 Apr 25 13:00 UTC | 07 Apr 25 13:00 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:55:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:55:30.276890  250122 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:55:30.277485  250122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:30.277504  250122 out.go:358] Setting ErrFile to fd 2...
	I0407 12:55:30.277512  250122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:30.277960  250122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 12:55:30.278952  250122 out.go:352] Setting JSON to false
	I0407 12:55:30.279820  250122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":16677,"bootTime":1744013853,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:55:30.279911  250122 start.go:139] virtualization: kvm guest
	I0407 12:55:30.281412  250122 out.go:177] * [addons-735249] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:55:30.282552  250122 notify.go:220] Checking for updates...
	I0407 12:55:30.282561  250122 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:55:30.283583  250122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:55:30.284542  250122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 12:55:30.285463  250122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 12:55:30.286439  250122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:55:30.287417  250122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:55:30.288630  250122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:55:30.319758  250122 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 12:55:30.320835  250122 start.go:297] selected driver: kvm2
	I0407 12:55:30.320857  250122 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:55:30.320875  250122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:55:30.321608  250122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:55:30.321706  250122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:55:30.336063  250122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:55:30.336106  250122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:55:30.336355  250122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:55:30.336394  250122 cni.go:84] Creating CNI manager for ""
	I0407 12:55:30.336472  250122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:55:30.336484  250122 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:55:30.336536  250122 start.go:340] cluster config:
	{Name:addons-735249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-735249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:55:30.336632  250122 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:55:30.338217  250122 out.go:177] * Starting "addons-735249" primary control-plane node in "addons-735249" cluster
	I0407 12:55:30.339306  250122 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:55:30.339346  250122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 12:55:30.339353  250122 cache.go:56] Caching tarball of preloaded images
	I0407 12:55:30.339434  250122 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 12:55:30.339444  250122 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 12:55:30.339787  250122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/config.json ...
	I0407 12:55:30.339814  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/config.json: {Name:mkdd96ffe9deb5f4933f4d89b97952c84a35f91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:55:30.339946  250122 start.go:360] acquireMachinesLock for addons-735249: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:55:30.339987  250122 start.go:364] duration metric: took 29.785µs to acquireMachinesLock for "addons-735249"
	I0407 12:55:30.340003  250122 start.go:93] Provisioning new machine with config: &{Name:addons-735249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-735249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:55:30.340060  250122 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 12:55:30.341660  250122 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0407 12:55:30.341795  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:55:30.341839  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:55:30.356094  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0407 12:55:30.356619  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:55:30.357132  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:55:30.357155  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:55:30.357551  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:55:30.357743  250122 main.go:141] libmachine: (addons-735249) Calling .GetMachineName
	I0407 12:55:30.357936  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:30.358111  250122 start.go:159] libmachine.API.Create for "addons-735249" (driver="kvm2")
	I0407 12:55:30.358154  250122 client.go:168] LocalClient.Create starting
	I0407 12:55:30.358186  250122 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem
	I0407 12:55:30.674344  250122 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem
	I0407 12:55:30.710459  250122 main.go:141] libmachine: Running pre-create checks...
	I0407 12:55:30.710482  250122 main.go:141] libmachine: (addons-735249) Calling .PreCreateCheck
	I0407 12:55:30.710990  250122 main.go:141] libmachine: (addons-735249) Calling .GetConfigRaw
	I0407 12:55:30.711483  250122 main.go:141] libmachine: Creating machine...
	I0407 12:55:30.711508  250122 main.go:141] libmachine: (addons-735249) Calling .Create
	I0407 12:55:30.711687  250122 main.go:141] libmachine: (addons-735249) creating KVM machine...
	I0407 12:55:30.711709  250122 main.go:141] libmachine: (addons-735249) creating network...
	I0407 12:55:30.713077  250122 main.go:141] libmachine: (addons-735249) DBG | found existing default KVM network
	I0407 12:55:30.713773  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:30.713599  250144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000220dd0}
	I0407 12:55:30.713798  250122 main.go:141] libmachine: (addons-735249) DBG | created network xml: 
	I0407 12:55:30.713811  250122 main.go:141] libmachine: (addons-735249) DBG | <network>
	I0407 12:55:30.713819  250122 main.go:141] libmachine: (addons-735249) DBG |   <name>mk-addons-735249</name>
	I0407 12:55:30.713827  250122 main.go:141] libmachine: (addons-735249) DBG |   <dns enable='no'/>
	I0407 12:55:30.713849  250122 main.go:141] libmachine: (addons-735249) DBG |   
	I0407 12:55:30.713864  250122 main.go:141] libmachine: (addons-735249) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0407 12:55:30.713877  250122 main.go:141] libmachine: (addons-735249) DBG |     <dhcp>
	I0407 12:55:30.713890  250122 main.go:141] libmachine: (addons-735249) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0407 12:55:30.713900  250122 main.go:141] libmachine: (addons-735249) DBG |     </dhcp>
	I0407 12:55:30.713909  250122 main.go:141] libmachine: (addons-735249) DBG |   </ip>
	I0407 12:55:30.713919  250122 main.go:141] libmachine: (addons-735249) DBG |   
	I0407 12:55:30.713928  250122 main.go:141] libmachine: (addons-735249) DBG | </network>
	I0407 12:55:30.713938  250122 main.go:141] libmachine: (addons-735249) DBG | 
	I0407 12:55:30.719122  250122 main.go:141] libmachine: (addons-735249) DBG | trying to create private KVM network mk-addons-735249 192.168.39.0/24...
	I0407 12:55:30.785298  250122 main.go:141] libmachine: (addons-735249) DBG | private KVM network mk-addons-735249 192.168.39.0/24 created
	I0407 12:55:30.785392  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:30.785248  250144 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 12:55:30.785408  250122 main.go:141] libmachine: (addons-735249) setting up store path in /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249 ...
	I0407 12:55:30.785427  250122 main.go:141] libmachine: (addons-735249) building disk image from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 12:55:30.785612  250122 main.go:141] libmachine: (addons-735249) Downloading /home/jenkins/minikube-integration/20598-242355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 12:55:31.049785  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:31.049634  250144 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa...
	I0407 12:55:31.478258  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:31.478127  250144 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/addons-735249.rawdisk...
	I0407 12:55:31.478306  250122 main.go:141] libmachine: (addons-735249) DBG | Writing magic tar header
	I0407 12:55:31.478316  250122 main.go:141] libmachine: (addons-735249) DBG | Writing SSH key tar header
	I0407 12:55:31.478323  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:31.478264  250144 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249 ...
	I0407 12:55:31.478399  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249
	I0407 12:55:31.478436  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249 (perms=drwx------)
	I0407 12:55:31.478446  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines
	I0407 12:55:31.478453  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines (perms=drwxr-xr-x)
	I0407 12:55:31.478461  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube (perms=drwxr-xr-x)
	I0407 12:55:31.478466  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins/minikube-integration/20598-242355 (perms=drwxrwxr-x)
	I0407 12:55:31.478474  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 12:55:31.478480  250122 main.go:141] libmachine: (addons-735249) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 12:55:31.478486  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 12:55:31.478559  250122 main.go:141] libmachine: (addons-735249) creating domain...
	I0407 12:55:31.478597  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355
	I0407 12:55:31.478616  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 12:55:31.478625  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home/jenkins
	I0407 12:55:31.478642  250122 main.go:141] libmachine: (addons-735249) DBG | checking permissions on dir: /home
	I0407 12:55:31.478653  250122 main.go:141] libmachine: (addons-735249) DBG | skipping /home - not owner
	I0407 12:55:31.479718  250122 main.go:141] libmachine: (addons-735249) define libvirt domain using xml: 
	I0407 12:55:31.479739  250122 main.go:141] libmachine: (addons-735249) <domain type='kvm'>
	I0407 12:55:31.479746  250122 main.go:141] libmachine: (addons-735249)   <name>addons-735249</name>
	I0407 12:55:31.479751  250122 main.go:141] libmachine: (addons-735249)   <memory unit='MiB'>4000</memory>
	I0407 12:55:31.479755  250122 main.go:141] libmachine: (addons-735249)   <vcpu>2</vcpu>
	I0407 12:55:31.479761  250122 main.go:141] libmachine: (addons-735249)   <features>
	I0407 12:55:31.479766  250122 main.go:141] libmachine: (addons-735249)     <acpi/>
	I0407 12:55:31.479769  250122 main.go:141] libmachine: (addons-735249)     <apic/>
	I0407 12:55:31.479774  250122 main.go:141] libmachine: (addons-735249)     <pae/>
	I0407 12:55:31.479781  250122 main.go:141] libmachine: (addons-735249)     
	I0407 12:55:31.479785  250122 main.go:141] libmachine: (addons-735249)   </features>
	I0407 12:55:31.479790  250122 main.go:141] libmachine: (addons-735249)   <cpu mode='host-passthrough'>
	I0407 12:55:31.479800  250122 main.go:141] libmachine: (addons-735249)   
	I0407 12:55:31.479807  250122 main.go:141] libmachine: (addons-735249)   </cpu>
	I0407 12:55:31.479811  250122 main.go:141] libmachine: (addons-735249)   <os>
	I0407 12:55:31.479829  250122 main.go:141] libmachine: (addons-735249)     <type>hvm</type>
	I0407 12:55:31.479852  250122 main.go:141] libmachine: (addons-735249)     <boot dev='cdrom'/>
	I0407 12:55:31.479866  250122 main.go:141] libmachine: (addons-735249)     <boot dev='hd'/>
	I0407 12:55:31.479873  250122 main.go:141] libmachine: (addons-735249)     <bootmenu enable='no'/>
	I0407 12:55:31.479877  250122 main.go:141] libmachine: (addons-735249)   </os>
	I0407 12:55:31.479894  250122 main.go:141] libmachine: (addons-735249)   <devices>
	I0407 12:55:31.479902  250122 main.go:141] libmachine: (addons-735249)     <disk type='file' device='cdrom'>
	I0407 12:55:31.479910  250122 main.go:141] libmachine: (addons-735249)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/boot2docker.iso'/>
	I0407 12:55:31.479916  250122 main.go:141] libmachine: (addons-735249)       <target dev='hdc' bus='scsi'/>
	I0407 12:55:31.479921  250122 main.go:141] libmachine: (addons-735249)       <readonly/>
	I0407 12:55:31.479925  250122 main.go:141] libmachine: (addons-735249)     </disk>
	I0407 12:55:31.479930  250122 main.go:141] libmachine: (addons-735249)     <disk type='file' device='disk'>
	I0407 12:55:31.479935  250122 main.go:141] libmachine: (addons-735249)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 12:55:31.479943  250122 main.go:141] libmachine: (addons-735249)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/addons-735249.rawdisk'/>
	I0407 12:55:31.479947  250122 main.go:141] libmachine: (addons-735249)       <target dev='hda' bus='virtio'/>
	I0407 12:55:31.479951  250122 main.go:141] libmachine: (addons-735249)     </disk>
	I0407 12:55:31.479955  250122 main.go:141] libmachine: (addons-735249)     <interface type='network'>
	I0407 12:55:31.479960  250122 main.go:141] libmachine: (addons-735249)       <source network='mk-addons-735249'/>
	I0407 12:55:31.479968  250122 main.go:141] libmachine: (addons-735249)       <model type='virtio'/>
	I0407 12:55:31.479974  250122 main.go:141] libmachine: (addons-735249)     </interface>
	I0407 12:55:31.479978  250122 main.go:141] libmachine: (addons-735249)     <interface type='network'>
	I0407 12:55:31.480000  250122 main.go:141] libmachine: (addons-735249)       <source network='default'/>
	I0407 12:55:31.480019  250122 main.go:141] libmachine: (addons-735249)       <model type='virtio'/>
	I0407 12:55:31.480028  250122 main.go:141] libmachine: (addons-735249)     </interface>
	I0407 12:55:31.480043  250122 main.go:141] libmachine: (addons-735249)     <serial type='pty'>
	I0407 12:55:31.480055  250122 main.go:141] libmachine: (addons-735249)       <target port='0'/>
	I0407 12:55:31.480079  250122 main.go:141] libmachine: (addons-735249)     </serial>
	I0407 12:55:31.480091  250122 main.go:141] libmachine: (addons-735249)     <console type='pty'>
	I0407 12:55:31.480100  250122 main.go:141] libmachine: (addons-735249)       <target type='serial' port='0'/>
	I0407 12:55:31.480111  250122 main.go:141] libmachine: (addons-735249)     </console>
	I0407 12:55:31.480120  250122 main.go:141] libmachine: (addons-735249)     <rng model='virtio'>
	I0407 12:55:31.480137  250122 main.go:141] libmachine: (addons-735249)       <backend model='random'>/dev/random</backend>
	I0407 12:55:31.480146  250122 main.go:141] libmachine: (addons-735249)     </rng>
	I0407 12:55:31.480151  250122 main.go:141] libmachine: (addons-735249)     
	I0407 12:55:31.480157  250122 main.go:141] libmachine: (addons-735249)     
	I0407 12:55:31.480162  250122 main.go:141] libmachine: (addons-735249)   </devices>
	I0407 12:55:31.480165  250122 main.go:141] libmachine: (addons-735249) </domain>
	I0407 12:55:31.480174  250122 main.go:141] libmachine: (addons-735249) 
	I0407 12:55:31.484411  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:23:97:66 in network default
	I0407 12:55:31.484989  250122 main.go:141] libmachine: (addons-735249) starting domain...
	I0407 12:55:31.485011  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:31.485019  250122 main.go:141] libmachine: (addons-735249) ensuring networks are active...
	I0407 12:55:31.485696  250122 main.go:141] libmachine: (addons-735249) Ensuring network default is active
	I0407 12:55:31.486034  250122 main.go:141] libmachine: (addons-735249) Ensuring network mk-addons-735249 is active
	I0407 12:55:31.486542  250122 main.go:141] libmachine: (addons-735249) getting domain XML...
	I0407 12:55:31.487225  250122 main.go:141] libmachine: (addons-735249) creating domain...
	I0407 12:55:32.686326  250122 main.go:141] libmachine: (addons-735249) waiting for IP...
	I0407 12:55:32.687101  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:32.687445  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:32.687507  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:32.687465  250144 retry.go:31] will retry after 281.228717ms: waiting for domain to come up
	I0407 12:55:32.969968  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:32.970496  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:32.970554  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:32.970463  250144 retry.go:31] will retry after 287.251072ms: waiting for domain to come up
	I0407 12:55:33.258988  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:33.259471  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:33.259500  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:33.259438  250144 retry.go:31] will retry after 386.864733ms: waiting for domain to come up
	I0407 12:55:33.648032  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:33.648408  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:33.648455  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:33.648358  250144 retry.go:31] will retry after 484.117336ms: waiting for domain to come up
	I0407 12:55:34.134115  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:34.134526  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:34.134555  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:34.134487  250144 retry.go:31] will retry after 661.68664ms: waiting for domain to come up
	I0407 12:55:34.798475  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:34.799002  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:34.799031  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:34.798969  250144 retry.go:31] will retry after 668.265659ms: waiting for domain to come up
	I0407 12:55:35.468283  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:35.468699  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:35.468723  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:35.468679  250144 retry.go:31] will retry after 895.540613ms: waiting for domain to come up
	I0407 12:55:36.365730  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:36.366262  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:36.366292  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:36.366188  250144 retry.go:31] will retry after 924.039391ms: waiting for domain to come up
	I0407 12:55:37.292289  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:37.292700  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:37.292722  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:37.292668  250144 retry.go:31] will retry after 1.800259339s: waiting for domain to come up
	I0407 12:55:39.095630  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:39.096087  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:39.096116  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:39.096039  250144 retry.go:31] will retry after 2.301844766s: waiting for domain to come up
	I0407 12:55:41.399231  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:41.399570  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:41.399591  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:41.399549  250144 retry.go:31] will retry after 2.621881151s: waiting for domain to come up
	I0407 12:55:44.024315  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:44.024648  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:44.024677  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:44.024611  250144 retry.go:31] will retry after 3.396032902s: waiting for domain to come up
	I0407 12:55:47.423652  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:47.423992  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:47.424015  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:47.423962  250144 retry.go:31] will retry after 2.97426397s: waiting for domain to come up
	I0407 12:55:50.399577  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:50.399988  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find current IP address of domain addons-735249 in network mk-addons-735249
	I0407 12:55:50.400013  250122 main.go:141] libmachine: (addons-735249) DBG | I0407 12:55:50.399950  250144 retry.go:31] will retry after 3.849489564s: waiting for domain to come up
	I0407 12:55:54.250622  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:54.251075  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has current primary IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:54.251105  250122 main.go:141] libmachine: (addons-735249) found domain IP: 192.168.39.136
	I0407 12:55:54.251118  250122 main.go:141] libmachine: (addons-735249) reserving static IP address...
	I0407 12:55:54.251431  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find host DHCP lease matching {name: "addons-735249", mac: "52:54:00:e6:43:7d", ip: "192.168.39.136"} in network mk-addons-735249
	I0407 12:55:54.324046  250122 main.go:141] libmachine: (addons-735249) reserved static IP address 192.168.39.136 for domain addons-735249
	I0407 12:55:54.324098  250122 main.go:141] libmachine: (addons-735249) DBG | Getting to WaitForSSH function...
	I0407 12:55:54.324109  250122 main.go:141] libmachine: (addons-735249) waiting for SSH...
	I0407 12:55:54.326693  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:54.327182  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249
	I0407 12:55:54.327208  250122 main.go:141] libmachine: (addons-735249) DBG | unable to find defined IP address of network mk-addons-735249 interface with MAC address 52:54:00:e6:43:7d
	I0407 12:55:54.327284  250122 main.go:141] libmachine: (addons-735249) DBG | Using SSH client type: external
	I0407 12:55:54.327323  250122 main.go:141] libmachine: (addons-735249) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa (-rw-------)
	I0407 12:55:54.327357  250122 main.go:141] libmachine: (addons-735249) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 12:55:54.327369  250122 main.go:141] libmachine: (addons-735249) DBG | About to run SSH command:
	I0407 12:55:54.327405  250122 main.go:141] libmachine: (addons-735249) DBG | exit 0
	I0407 12:55:54.331084  250122 main.go:141] libmachine: (addons-735249) DBG | SSH cmd err, output: exit status 255: 
	I0407 12:55:54.331102  250122 main.go:141] libmachine: (addons-735249) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0407 12:55:54.331108  250122 main.go:141] libmachine: (addons-735249) DBG | command : exit 0
	I0407 12:55:54.331113  250122 main.go:141] libmachine: (addons-735249) DBG | err     : exit status 255
	I0407 12:55:54.331132  250122 main.go:141] libmachine: (addons-735249) DBG | output  : 
	I0407 12:55:57.332993  250122 main.go:141] libmachine: (addons-735249) DBG | Getting to WaitForSSH function...
	I0407 12:55:57.335320  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.335944  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.335972  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.336160  250122 main.go:141] libmachine: (addons-735249) DBG | Using SSH client type: external
	I0407 12:55:57.336198  250122 main.go:141] libmachine: (addons-735249) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa (-rw-------)
	I0407 12:55:57.336267  250122 main.go:141] libmachine: (addons-735249) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 12:55:57.336289  250122 main.go:141] libmachine: (addons-735249) DBG | About to run SSH command:
	I0407 12:55:57.336326  250122 main.go:141] libmachine: (addons-735249) DBG | exit 0
	I0407 12:55:57.464631  250122 main.go:141] libmachine: (addons-735249) DBG | SSH cmd err, output: <nil>: 
	I0407 12:55:57.464916  250122 main.go:141] libmachine: (addons-735249) KVM machine creation complete
	I0407 12:55:57.465262  250122 main.go:141] libmachine: (addons-735249) Calling .GetConfigRaw
	I0407 12:55:57.465870  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:57.466068  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:57.466232  250122 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 12:55:57.466247  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:55:57.467534  250122 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 12:55:57.467557  250122 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 12:55:57.467563  250122 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 12:55:57.467568  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:57.469586  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.469890  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.469914  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.470035  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:57.470217  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.470400  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.470529  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:57.470693  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:57.471016  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:57.471032  250122 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 12:55:57.579586  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:55:57.579611  250122 main.go:141] libmachine: Detecting the provisioner...
	I0407 12:55:57.579621  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:57.582252  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.582596  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.582629  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.582761  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:57.582959  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.583150  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.583272  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:57.583497  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:57.583753  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:57.583768  250122 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 12:55:57.693308  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 12:55:57.693414  250122 main.go:141] libmachine: found compatible host: buildroot
	I0407 12:55:57.693428  250122 main.go:141] libmachine: Provisioning with buildroot...
	I0407 12:55:57.693438  250122 main.go:141] libmachine: (addons-735249) Calling .GetMachineName
	I0407 12:55:57.693729  250122 buildroot.go:166] provisioning hostname "addons-735249"
	I0407 12:55:57.693759  250122 main.go:141] libmachine: (addons-735249) Calling .GetMachineName
	I0407 12:55:57.693931  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:57.696611  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.696912  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.696935  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.696999  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:57.697235  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.697370  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.697489  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:57.697747  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:57.697971  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:57.697991  250122 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-735249 && echo "addons-735249" | sudo tee /etc/hostname
	I0407 12:55:57.819250  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-735249
	
	I0407 12:55:57.819286  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:57.822098  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.822445  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.822470  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.822644  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:57.822856  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.823017  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:57.823130  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:57.823288  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:57.823505  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:57.823526  250122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-735249' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-735249/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-735249' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:55:57.941423  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:55:57.941462  250122 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 12:55:57.941508  250122 buildroot.go:174] setting up certificates
	I0407 12:55:57.941526  250122 provision.go:84] configureAuth start
	I0407 12:55:57.941542  250122 main.go:141] libmachine: (addons-735249) Calling .GetMachineName
	I0407 12:55:57.941867  250122 main.go:141] libmachine: (addons-735249) Calling .GetIP
	I0407 12:55:57.944671  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.944991  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.945010  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.945162  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:57.947178  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.947511  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:57.947538  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:57.947654  250122 provision.go:143] copyHostCerts
	I0407 12:55:57.947748  250122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 12:55:57.947901  250122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 12:55:57.947977  250122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 12:55:57.948056  250122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.addons-735249 san=[127.0.0.1 192.168.39.136 addons-735249 localhost minikube]
	I0407 12:55:58.255003  250122 provision.go:177] copyRemoteCerts
	I0407 12:55:58.255081  250122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:55:58.255107  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.257661  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.258015  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.258050  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.258210  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.258435  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.258594  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.258735  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:55:58.343020  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 12:55:58.366700  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:55:58.389177  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 12:55:58.411345  250122 provision.go:87] duration metric: took 469.802888ms to configureAuth
	I0407 12:55:58.411373  250122 buildroot.go:189] setting minikube options for container-runtime
	I0407 12:55:58.411574  250122 config.go:182] Loaded profile config "addons-735249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:55:58.411683  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.414344  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.414637  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.414659  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.414938  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.415136  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.415314  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.415475  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.415633  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:58.415919  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:58.415940  250122 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 12:55:58.649012  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 12:55:58.649052  250122 main.go:141] libmachine: Checking connection to Docker...
	I0407 12:55:58.649064  250122 main.go:141] libmachine: (addons-735249) Calling .GetURL
	I0407 12:55:58.650404  250122 main.go:141] libmachine: (addons-735249) DBG | using libvirt version 6000000
	I0407 12:55:58.652730  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.653064  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.653094  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.653208  250122 main.go:141] libmachine: Docker is up and running!
	I0407 12:55:58.653223  250122 main.go:141] libmachine: Reticulating splines...
	I0407 12:55:58.653231  250122 client.go:171] duration metric: took 28.295066811s to LocalClient.Create
	I0407 12:55:58.653255  250122 start.go:167] duration metric: took 28.295147933s to libmachine.API.Create "addons-735249"
	I0407 12:55:58.653265  250122 start.go:293] postStartSetup for "addons-735249" (driver="kvm2")
	I0407 12:55:58.653277  250122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:55:58.653308  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:58.653543  250122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:55:58.653581  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.655831  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.656109  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.656146  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.656283  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.656493  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.656642  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.656777  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:55:58.743212  250122 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:55:58.747692  250122 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 12:55:58.747728  250122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 12:55:58.747795  250122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 12:55:58.747818  250122 start.go:296] duration metric: took 94.544731ms for postStartSetup
	I0407 12:55:58.747859  250122 main.go:141] libmachine: (addons-735249) Calling .GetConfigRaw
	I0407 12:55:58.748447  250122 main.go:141] libmachine: (addons-735249) Calling .GetIP
	I0407 12:55:58.751043  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.751419  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.751443  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.751796  250122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/config.json ...
	I0407 12:55:58.751988  250122 start.go:128] duration metric: took 28.411916432s to createHost
	I0407 12:55:58.752016  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.754308  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.754623  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.754654  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.754740  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.754935  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.755103  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.755215  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.755383  250122 main.go:141] libmachine: Using SSH client type: native
	I0407 12:55:58.755579  250122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0407 12:55:58.755589  250122 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 12:55:58.865704  250122 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744030558.845316805
	
	I0407 12:55:58.865735  250122 fix.go:216] guest clock: 1744030558.845316805
	I0407 12:55:58.865745  250122 fix.go:229] Guest: 2025-04-07 12:55:58.845316805 +0000 UTC Remote: 2025-04-07 12:55:58.752000151 +0000 UTC m=+28.510069010 (delta=93.316654ms)
	I0407 12:55:58.865773  250122 fix.go:200] guest clock delta is within tolerance: 93.316654ms
	I0407 12:55:58.865779  250122 start.go:83] releasing machines lock for "addons-735249", held for 28.525783126s
	I0407 12:55:58.865835  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:58.866182  250122 main.go:141] libmachine: (addons-735249) Calling .GetIP
	I0407 12:55:58.868795  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.869208  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.869239  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.869378  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:58.869885  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:58.870080  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:55:58.870221  250122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 12:55:58.870283  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.870312  250122 ssh_runner.go:195] Run: cat /version.json
	I0407 12:55:58.870338  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:55:58.872903  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.873004  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.873352  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.873381  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.873409  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:55:58.873427  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:55:58.873490  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.873667  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.873758  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:55:58.873850  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.873949  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:55:58.874047  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:55:58.874051  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:55:58.874193  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:55:58.977857  250122 ssh_runner.go:195] Run: systemctl --version
	I0407 12:55:58.985942  250122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 12:55:59.247816  250122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 12:55:59.254090  250122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:55:59.254157  250122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:55:59.271035  250122 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 12:55:59.271069  250122 start.go:495] detecting cgroup driver to use...
	I0407 12:55:59.271137  250122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 12:55:59.291636  250122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:55:59.308971  250122 docker.go:217] disabling cri-docker service (if available) ...
	I0407 12:55:59.309056  250122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 12:55:59.324289  250122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 12:55:59.338605  250122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 12:55:59.458449  250122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 12:55:59.604323  250122 docker.go:233] disabling docker service ...
	I0407 12:55:59.604455  250122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 12:55:59.619933  250122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 12:55:59.633973  250122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 12:55:59.774427  250122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 12:55:59.907081  250122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 12:55:59.920930  250122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:55:59.939143  250122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 12:55:59.939207  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:55:59.949566  250122 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 12:55:59.949629  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:55:59.959896  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:55:59.970259  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:55:59.980395  250122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:55:59.990791  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:00.000946  250122 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:00.017835  250122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 12:56:00.028130  250122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:56:00.037720  250122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 12:56:00.037781  250122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 12:56:00.050412  250122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:56:00.060149  250122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:00.190829  250122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 12:56:00.280024  250122 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 12:56:00.280154  250122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 12:56:00.284664  250122 start.go:563] Will wait 60s for crictl version
	I0407 12:56:00.284756  250122 ssh_runner.go:195] Run: which crictl
	I0407 12:56:00.288463  250122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:56:00.326822  250122 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 12:56:00.326914  250122 ssh_runner.go:195] Run: crio --version
	I0407 12:56:00.353929  250122 ssh_runner.go:195] Run: crio --version
	I0407 12:56:00.381567  250122 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 12:56:00.382854  250122 main.go:141] libmachine: (addons-735249) Calling .GetIP
	I0407 12:56:00.385502  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:00.385900  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:00.385930  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:00.386122  250122 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 12:56:00.390193  250122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:56:00.402214  250122 kubeadm.go:883] updating cluster {Name:addons-735249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-735249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:56:00.402333  250122 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 12:56:00.402379  250122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:56:00.433266  250122 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 12:56:00.433332  250122 ssh_runner.go:195] Run: which lz4
	I0407 12:56:00.437153  250122 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 12:56:00.441185  250122 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 12:56:00.441208  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 12:56:01.719563  250122 crio.go:462] duration metric: took 1.28242954s to copy over tarball
	I0407 12:56:01.719660  250122 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 12:56:03.851366  250122 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.131644002s)
	I0407 12:56:03.851402  250122 crio.go:469] duration metric: took 2.131791859s to extract the tarball
	I0407 12:56:03.851410  250122 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 12:56:03.889461  250122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 12:56:03.931616  250122 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 12:56:03.931645  250122 cache_images.go:84] Images are preloaded, skipping loading
	I0407 12:56:03.931655  250122 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.32.2 crio true true} ...
	I0407 12:56:03.931808  250122 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-735249 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-735249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:56:03.931908  250122 ssh_runner.go:195] Run: crio config
	I0407 12:56:03.975127  250122 cni.go:84] Creating CNI manager for ""
	I0407 12:56:03.975168  250122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:56:03.975184  250122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:56:03.975214  250122 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-735249 NodeName:addons-735249 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:56:03.975353  250122 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-735249"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.136"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:56:03.975429  250122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:56:03.985479  250122 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:56:03.985557  250122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:56:03.994859  250122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0407 12:56:04.011346  250122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:56:04.027580  250122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0407 12:56:04.043910  250122 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0407 12:56:04.047981  250122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:56:04.060110  250122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:04.180016  250122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:56:04.197451  250122 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249 for IP: 192.168.39.136
	I0407 12:56:04.197477  250122 certs.go:194] generating shared ca certs ...
	I0407 12:56:04.197498  250122 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:04.197655  250122 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 12:56:04.720863  250122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt ...
	I0407 12:56:04.720898  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt: {Name:mkbd555abbcd715f7150db5031d0f34c2ffe6296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:04.721098  250122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key ...
	I0407 12:56:04.721115  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key: {Name:mk4c87d68b55fe7d09446424a49af513cc55a701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:04.721224  250122 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 12:56:04.990597  250122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt ...
	I0407 12:56:04.990634  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt: {Name:mkecaa2fee72391aa9504ad7d3e182c1d8f2b327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:04.990838  250122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key ...
	I0407 12:56:04.990854  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key: {Name:mk049f80e4cecc79d7af9835edf14790e8d99a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:04.990969  250122 certs.go:256] generating profile certs ...
	I0407 12:56:04.991065  250122 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.key
	I0407 12:56:04.991085  250122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt with IP's: []
	I0407 12:56:05.035900  250122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt ...
	I0407 12:56:05.035938  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: {Name:mk51ee8bdabb137dc36ece9ae835600cc661d1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.036126  250122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.key ...
	I0407 12:56:05.036143  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.key: {Name:mk9d0adfd110e2b6a2410da714d3b04681659602 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.036245  250122 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key.37bbb53b
	I0407 12:56:05.036268  250122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt.37bbb53b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.136]
	I0407 12:56:05.115743  250122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt.37bbb53b ...
	I0407 12:56:05.115791  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt.37bbb53b: {Name:mk1e420ec0f4969d9c917d588120045d6bb437ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.116042  250122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key.37bbb53b ...
	I0407 12:56:05.116067  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key.37bbb53b: {Name:mk5695c855af4275684d9df61433ac7310139fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.116227  250122 certs.go:381] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt.37bbb53b -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt
	I0407 12:56:05.116381  250122 certs.go:385] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key.37bbb53b -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key
	I0407 12:56:05.116498  250122 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.key
	I0407 12:56:05.116532  250122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.crt with IP's: []
	I0407 12:56:05.168668  250122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.crt ...
	I0407 12:56:05.168707  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.crt: {Name:mk3e8cfd715ad3020fa48c569f6fa3f85236dd36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.168901  250122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.key ...
	I0407 12:56:05.168922  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.key: {Name:mke0dc7593e242e76855c50a39cef61612a81e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:05.169147  250122 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:56:05.169201  250122 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 12:56:05.169240  250122 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:56:05.169276  250122 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 12:56:05.170068  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:56:05.195764  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 12:56:05.221581  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:56:05.248673  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 12:56:05.277607  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 12:56:05.303268  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 12:56:05.329330  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:56:05.353207  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:56:05.380569  250122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:56:05.404867  250122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:56:05.421338  250122 ssh_runner.go:195] Run: openssl version
	I0407 12:56:05.427171  250122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:56:05.438487  250122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:05.443121  250122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:05.443187  250122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:56:05.449038  250122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:56:05.459809  250122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:56:05.464170  250122 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:56:05.464227  250122 kubeadm.go:392] StartCluster: {Name:addons-735249 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-735249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:56:05.464313  250122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 12:56:05.464370  250122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 12:56:05.508406  250122 cri.go:89] found id: ""
	I0407 12:56:05.508501  250122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:56:05.518649  250122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:56:05.534797  250122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:56:05.546352  250122 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:56:05.546375  250122 kubeadm.go:157] found existing configuration files:
	
	I0407 12:56:05.546429  250122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:56:05.556257  250122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:56:05.556328  250122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:56:05.567057  250122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:56:05.576497  250122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:56:05.576567  250122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:56:05.589746  250122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:56:05.601187  250122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:56:05.601260  250122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:56:05.616986  250122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:56:05.629262  250122 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:56:05.629344  250122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
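
The log lines above show minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or points elsewhere is removed so kubeadm can regenerate it. A simplified Go sketch of that loop (not minikube's actual kubeadm.go code; the command strings mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// Simplified sketch of the stale-config cleanup logged above: grep each
// kubeconfig for the control-plane endpoint; when the pattern or the file is
// absent, grep exits non-zero and the file is removed so "kubeadm init" can
// regenerate it.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
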
	I0407 12:56:05.641688  250122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:56:05.810771  250122 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:56:15.746891  250122 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:56:15.746983  250122 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:56:15.747098  250122 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:56:15.747243  250122 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:56:15.747336  250122 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:56:15.747392  250122 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:56:15.748715  250122 out.go:235]   - Generating certificates and keys ...
	I0407 12:56:15.748776  250122 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:56:15.748826  250122 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:56:15.748903  250122 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:56:15.748956  250122 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:56:15.749019  250122 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:56:15.749084  250122 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:56:15.749172  250122 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:56:15.749354  250122 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-735249 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0407 12:56:15.749436  250122 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:56:15.749544  250122 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-735249 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0407 12:56:15.749624  250122 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:56:15.749718  250122 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:56:15.749796  250122 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:56:15.749885  250122 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:56:15.749938  250122 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:56:15.749986  250122 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:56:15.750033  250122 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:56:15.750090  250122 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:56:15.750137  250122 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:56:15.750212  250122 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:56:15.750296  250122 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:56:15.752227  250122 out.go:235]   - Booting up control plane ...
	I0407 12:56:15.752313  250122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:56:15.752393  250122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:56:15.752477  250122 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:56:15.752565  250122 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:56:15.752648  250122 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:56:15.752685  250122 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:56:15.752795  250122 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:56:15.752917  250122 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:56:15.753014  250122 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.955135ms
	I0407 12:56:15.753077  250122 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:56:15.753145  250122 kubeadm.go:310] [api-check] The API server is healthy after 5.002234089s
	I0407 12:56:15.753247  250122 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:56:15.753351  250122 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:56:15.753400  250122 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:56:15.753551  250122 kubeadm.go:310] [mark-control-plane] Marking the node addons-735249 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:56:15.753610  250122 kubeadm.go:310] [bootstrap-token] Using token: qu49ht.6hwp7tj8ahei1aan
	I0407 12:56:15.755712  250122 out.go:235]   - Configuring RBAC rules ...
	I0407 12:56:15.755847  250122 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:56:15.755987  250122 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:56:15.756200  250122 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:56:15.756372  250122 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:56:15.756515  250122 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:56:15.756593  250122 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:56:15.756738  250122 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:56:15.756782  250122 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:56:15.756822  250122 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:56:15.756831  250122 kubeadm.go:310] 
	I0407 12:56:15.756884  250122 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:56:15.756890  250122 kubeadm.go:310] 
	I0407 12:56:15.756973  250122 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:56:15.756984  250122 kubeadm.go:310] 
	I0407 12:56:15.757015  250122 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:56:15.757081  250122 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:56:15.757156  250122 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:56:15.757165  250122 kubeadm.go:310] 
	I0407 12:56:15.757246  250122 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:56:15.757259  250122 kubeadm.go:310] 
	I0407 12:56:15.757328  250122 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:56:15.757337  250122 kubeadm.go:310] 
	I0407 12:56:15.757410  250122 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:56:15.757521  250122 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:56:15.757626  250122 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:56:15.757641  250122 kubeadm.go:310] 
	I0407 12:56:15.757750  250122 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:56:15.757836  250122 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:56:15.757845  250122 kubeadm.go:310] 
	I0407 12:56:15.757933  250122 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qu49ht.6hwp7tj8ahei1aan \
	I0407 12:56:15.758065  250122 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32474e3eab886cc15ff9f2dd4bbc0173f591f56a4462b649734187e3fb003cf1 \
	I0407 12:56:15.758094  250122 kubeadm.go:310] 	--control-plane 
	I0407 12:56:15.758098  250122 kubeadm.go:310] 
	I0407 12:56:15.758167  250122 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:56:15.758174  250122 kubeadm.go:310] 
	I0407 12:56:15.758245  250122 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qu49ht.6hwp7tj8ahei1aan \
	I0407 12:56:15.758350  250122 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32474e3eab886cc15ff9f2dd4bbc0173f591f56a4462b649734187e3fb003cf1 
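
The [kubelet-check] and [api-check] lines in the kubeadm output above each describe a health poll against an HTTP /healthz endpoint with a 4m0s ceiling. A hedged sketch of such a poll loop (the retry interval is illustrative, not kubeadm's exact value):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Hedged sketch of the poll loops behind the "[kubelet-check]" and
// "[api-check]" lines: request a /healthz endpoint until it answers 200 OK
// or the 4m0s ceiling from the log is reached.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
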
	I0407 12:56:15.758364  250122 cni.go:84] Creating CNI manager for ""
	I0407 12:56:15.758370  250122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:56:15.759882  250122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 12:56:15.761144  250122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 12:56:15.775909  250122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
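
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. The log does not show its contents; the Go program below writes an illustrative bridge conflist whose field values are assumptions, not minikube's actual template:

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI conflist in the spirit of the file copied to
// /etc/cni/net.d/1-k8s.conflist above. Field values here are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// The log shows the bytes being copied to the node over SSH; writing a
	// local file here keeps the sketch self-contained.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
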
	I0407 12:56:15.792888  250122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:56:15.792938  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:15.793018  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-735249 minikube.k8s.io/updated_at=2025_04_07T12_56_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=addons-735249 minikube.k8s.io/primary=true
	I0407 12:56:15.926199  250122 ops.go:34] apiserver oom_adj: -16
	I0407 12:56:15.926215  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:16.426591  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:16.927273  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:17.426953  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:17.926282  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:18.427050  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:18.926996  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:19.426522  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:19.926977  250122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:56:20.005621  250122 kubeadm.go:1113] duration metric: took 4.21273609s to wait for elevateKubeSystemPrivileges
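
The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are a readiness wait: minikube keeps retrying until the default ServiceAccount exists before elevateKubeSystemPrivileges finishes (about 4.2s in this run). A hedged sketch of that retry loop, reusing the binary and kubeconfig paths from the log but with an illustrative timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Hedged sketch of the retry implied by the repeated "kubectl get sa default"
// runs above: poll roughly every 500ms until the default ServiceAccount
// exists. Binary and kubeconfig paths mirror the log; the timeout is
// illustrative.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.2/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}
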
	I0407 12:56:20.005665  250122 kubeadm.go:394] duration metric: took 14.541442011s to StartCluster
	I0407 12:56:20.005692  250122 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:20.005850  250122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 12:56:20.006253  250122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:56:20.006469  250122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:56:20.006497  250122 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 12:56:20.006562  250122 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
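
The toEnable map above lists every addon with its requested state, and the interleaved log lines that follow come from the per-addon setup running concurrently. A rough sketch of that fan-out pattern (enableAddon is a stand-in, not minikube's real function):

package main

import (
	"fmt"
	"sync"
)

// Rough sketch of the fan-out suggested by the interleaved addon logs below:
// every addon marked true in the map is set up concurrently and the caller
// waits for all of them to finish.
func enableAddons(toEnable map[string]bool) {
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			enableAddon(n)
		}(name)
	}
	wg.Wait()
}

func enableAddon(name string) { fmt.Println("enabling addon:", name) }

func main() {
	enableAddons(map[string]bool{"ingress": true, "metrics-server": true, "volcano": false})
}
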
	I0407 12:56:20.006679  250122 addons.go:69] Setting yakd=true in profile "addons-735249"
	I0407 12:56:20.006689  250122 addons.go:69] Setting ingress-dns=true in profile "addons-735249"
	I0407 12:56:20.006703  250122 addons.go:238] Setting addon yakd=true in "addons-735249"
	I0407 12:56:20.006707  250122 addons.go:238] Setting addon ingress-dns=true in "addons-735249"
	I0407 12:56:20.006726  250122 addons.go:69] Setting registry=true in profile "addons-735249"
	I0407 12:56:20.006727  250122 addons.go:69] Setting metrics-server=true in profile "addons-735249"
	I0407 12:56:20.006746  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.006750  250122 addons.go:69] Setting storage-provisioner=true in profile "addons-735249"
	I0407 12:56:20.006755  250122 addons.go:238] Setting addon registry=true in "addons-735249"
	I0407 12:56:20.006756  250122 addons.go:238] Setting addon metrics-server=true in "addons-735249"
	I0407 12:56:20.006762  250122 addons.go:238] Setting addon storage-provisioner=true in "addons-735249"
	I0407 12:56:20.006781  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.006764  250122 addons.go:69] Setting inspektor-gadget=true in profile "addons-735249"
	I0407 12:56:20.006790  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.006790  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.006798  250122 addons.go:238] Setting addon inspektor-gadget=true in "addons-735249"
	I0407 12:56:20.006811  250122 config.go:182] Loaded profile config "addons-735249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:56:20.006833  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.006858  250122 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-735249"
	I0407 12:56:20.006889  250122 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-735249"
	I0407 12:56:20.006907  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007165  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007190  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007202  250122 addons.go:69] Setting default-storageclass=true in profile "addons-735249"
	I0407 12:56:20.007213  250122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-735249"
	I0407 12:56:20.007216  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007222  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007229  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007236  250122 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-735249"
	I0407 12:56:20.007247  250122 addons.go:69] Setting cloud-spanner=true in profile "addons-735249"
	I0407 12:56:20.007251  250122 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-735249"
	I0407 12:56:20.007260  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007261  250122 addons.go:69] Setting volcano=true in profile "addons-735249"
	I0407 12:56:20.007262  250122 addons.go:238] Setting addon cloud-spanner=true in "addons-735249"
	I0407 12:56:20.007273  250122 addons.go:238] Setting addon volcano=true in "addons-735249"
	I0407 12:56:20.007274  250122 addons.go:69] Setting volumesnapshots=true in profile "addons-735249"
	I0407 12:56:20.007284  250122 addons.go:238] Setting addon volumesnapshots=true in "addons-735249"
	I0407 12:56:20.007289  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007293  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007301  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007224  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007528  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007556  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007573  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007588  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007600  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007261  250122 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-735249"
	I0407 12:56:20.007640  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007645  250122 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-735249"
	I0407 12:56:20.007655  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007664  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007665  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007670  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007677  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007994  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.007194  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.008036  250122 addons.go:69] Setting gcp-auth=true in profile "addons-735249"
	I0407 12:56:20.008039  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.008054  250122 mustload.go:65] Loading cluster: addons-735249
	I0407 12:56:20.008234  250122 config.go:182] Loaded profile config "addons-735249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 12:56:20.008576  250122 addons.go:69] Setting ingress=true in profile "addons-735249"
	I0407 12:56:20.008605  250122 addons.go:238] Setting addon ingress=true in "addons-735249"
	I0407 12:56:20.008641  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.007644  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.008872  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007251  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.007237  250122 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-735249"
	I0407 12:56:20.009145  250122 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-735249"
	I0407 12:56:20.009178  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.009377  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.006741  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.012485  250122 out.go:177] * Verifying Kubernetes components...
	I0407 12:56:20.014115  250122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:56:20.027558  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41263
	I0407 12:56:20.028928  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.028969  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.029019  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.029045  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.029214  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46825
	I0407 12:56:20.029232  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I0407 12:56:20.029371  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0407 12:56:20.029626  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
	I0407 12:56:20.029807  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.029888  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.029940  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.030468  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.030508  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.030801  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.030807  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.030828  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.030935  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.031345  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.031500  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.031514  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.031581  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.031712  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.031723  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.031780  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.031840  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.032271  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.032296  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.036794  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.036982  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.036996  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.037112  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.037123  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.037173  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.037597  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.038021  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.038066  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.038515  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.038631  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.039769  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.039806  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.041145  250122 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-735249"
	I0407 12:56:20.041194  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.041539  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.041570  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.044456  250122 addons.go:238] Setting addon default-storageclass=true in "addons-735249"
	I0407 12:56:20.044515  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.044892  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.044940  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.064040  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0407 12:56:20.067499  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37099
	I0407 12:56:20.068055  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.068555  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.068576  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.068941  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.069514  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.069558  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.070734  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0407 12:56:20.071221  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.071609  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.071627  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.071953  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.072506  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.072546  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.075123  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I0407 12:56:20.075679  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.076123  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.076143  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.076672  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.077261  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.077304  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.085038  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.086108  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.086134  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.087025  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
	I0407 12:56:20.087664  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.088241  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.088261  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.088757  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.089437  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.089482  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.089848  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.090488  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.090536  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.090780  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0407 12:56:20.091323  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.091417  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0407 12:56:20.092073  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.092094  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.092585  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.093267  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37303
	I0407 12:56:20.093435  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34843
	I0407 12:56:20.093949  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.094036  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.094487  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.094505  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.094945  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.095539  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.095580  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.095984  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I0407 12:56:20.096323  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.096406  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.096861  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.096886  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.097010  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.097021  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.097407  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.097575  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.098439  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.098558  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0407 12:56:20.098852  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.098918  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.099506  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.099530  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.099674  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.100187  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.100589  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.100637  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.100844  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.101012  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.101029  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.101709  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.101927  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0407 12:56:20.103538  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0407 12:56:20.104311  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.104356  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.104550  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.104976  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.105000  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.105349  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.106216  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0407 12:56:20.106292  250122 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:56:20.107049  250122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:56:20.107745  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0407 12:56:20.107964  250122 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:20.107992  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:56:20.108014  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.108626  250122 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:56:20.108651  250122 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:56:20.108721  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.108721  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.109009  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.109403  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.109429  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.109774  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.110008  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.112656  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.112729  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0407 12:56:20.112982  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.113733  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.113761  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.113802  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:20.114167  250122 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0407 12:56:20.114197  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.114232  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.114734  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.114909  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.115046  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.115096  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.115196  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.115419  250122 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:56:20.115436  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:56:20.115454  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.115578  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.115612  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.116741  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.116958  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.117121  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.117261  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.118621  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0407 12:56:20.118693  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.119136  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.119243  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.119455  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.119642  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.119775  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.119881  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
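
Each "new ssh client" line above pairs the node's IP, port, key path and user, and the "scp memory --> <path> (N bytes)" lines stream in-memory manifests to the node over that connection. minikube does this through its own ssh_runner and the scp protocol; the sketch below is a simplified equivalent that pipes the bytes through sudo tee (connection details are taken from the log, while the tee approach and the example manifest path are assumptions):

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// Simplified equivalent of the ssh client + "scp memory" steps above: open an
// SSH connection with the machine key from the log and stream in-memory
// manifest bytes to a path on the node.
func copyToNode(ip, user, keyPath, remotePath string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", ip+":22", &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}

func main() {
	// Placeholder manifest and remote path; the log copies real addon yaml
	// files such as storage-provisioner.yaml and ig-crd.yaml.
	manifest := []byte("# addon manifest bytes would go here\n")
	err := copyToNode("192.168.39.136", "docker",
		"/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa",
		"/etc/kubernetes/addons/example.yaml", manifest)
	fmt.Println(err)
}
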
	I0407 12:56:20.123208  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0407 12:56:20.124003  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0407 12:56:20.126348  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0407 12:56:20.132892  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0407 12:56:20.133417  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.133467  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.134209  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.134212  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.134373  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.134375  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.134450  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.135161  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.135194  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.135168  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135236  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135399  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135408  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135426  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135439  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.135411  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135514  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0407 12:56:20.135513  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135530  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135560  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135572  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135839  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.135853  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.135909  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.135955  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.136446  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.136512  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.136566  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.136582  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.136949  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.137002  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.137231  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.137296  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.137311  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.137335  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.137403  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.137452  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.137506  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.137558  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.137625  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.137707  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.137740  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.137752  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.137776  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.137847  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.138215  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.138282  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0407 12:56:20.138842  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:20.138882  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:20.139185  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.139581  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.140159  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.140167  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.140386  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.141202  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.141230  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.141357  250122 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:56:20.141797  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.142146  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.142184  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.141873  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:20.142263  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:20.141965  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.142772  250122 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:56:20.142788  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:56:20.142816  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.142892  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.142995  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:20.143020  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:20.143026  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:20.143032  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:20.143038  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:20.143293  250122 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:56:20.143703  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.143780  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:20.143804  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:20.143854  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	W0407 12:56:20.144967  250122 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0407 12:56:20.145752  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.145818  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.146277  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.146359  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.146376  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.146407  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.147153  250122 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:56:20.147160  250122 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:56:20.147180  250122 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:56:20.147197  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.147201  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.147163  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:56:20.147509  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.148102  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.148372  250122 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:56:20.148398  250122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:56:20.148417  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.148381  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:56:20.148483  250122 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:56:20.148494  250122 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:56:20.148508  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.148852  250122 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0407 12:56:20.150039  250122 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:56:20.150057  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:56:20.150074  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.150607  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.151389  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.151412  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.151788  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.152055  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.152330  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.152577  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.152517  250122 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:56:20.153604  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.154036  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.154071  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.154084  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.154365  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.154543  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.154694  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.154731  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.154747  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.154846  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.154903  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.154899  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.155058  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.155174  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.155375  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.155657  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.155690  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.155864  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.156028  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.156116  250122 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:56:20.156169  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.156311  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.157743  250122 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:56:20.157763  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0407 12:56:20.157793  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.160616  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.160883  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.160901  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.161100  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.161294  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.161433  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.161571  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.165603  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0407 12:56:20.166191  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.166710  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.166738  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.167127  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.167342  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.169098  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.169342  250122 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:20.169361  250122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:56:20.169381  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.172202  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0407 12:56:20.172689  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.172728  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.173038  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.173063  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.173250  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.173403  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.173460  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.173489  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.173513  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.173610  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.173902  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.174098  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.175581  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.177083  250122 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:56:20.178266  250122 out.go:177]   - Using image docker.io/registry:2.8.3
	I0407 12:56:20.179558  250122 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:56:20.179576  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:56:20.179594  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.181589  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0407 12:56:20.182019  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.182520  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.182539  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.182957  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.183245  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.183317  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.183772  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0407 12:56:20.183914  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.183929  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.184163  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.184304  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.184410  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.184551  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.184975  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.185125  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.185515  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.185532  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.185901  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.186187  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.186900  250122 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0407 12:56:20.188017  250122 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:56:20.188032  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0407 12:56:20.188046  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.189439  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0407 12:56:20.189826  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.190335  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.190349  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.191290  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.191324  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.191540  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.192089  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.192148  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.192170  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.192259  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.192440  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.192578  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.193150  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.193170  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0407 12:56:20.193594  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:20.193985  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:20.194009  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:20.194429  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:20.194613  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:20.194691  250122 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0407 12:56:20.195778  250122 out.go:177]   - Using image docker.io/busybox:stable
	I0407 12:56:20.196353  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:20.196915  250122 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:56:20.196930  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0407 12:56:20.196949  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.197559  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:56:20.198596  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:56:20.199789  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:56:20.199817  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.200242  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.200273  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.200418  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.200576  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.200725  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.200841  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.201909  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:56:20.202912  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:56:20.203988  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:56:20.205029  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:56:20.206089  250122 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:56:20.207005  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:56:20.207025  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:56:20.207048  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:20.210122  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.210530  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:20.210555  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:20.210746  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:20.210923  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:20.211062  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:20.211219  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:20.506063  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:56:20.521052  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:56:20.576547  250122 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:56:20.576583  250122 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:56:20.583431  250122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:56:20.583452  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:56:20.599849  250122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:56:20.599961  250122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 12:56:20.601851  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:56:20.654717  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:56:20.656722  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:56:20.659062  250122 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:56:20.659077  250122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:56:20.660071  250122 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:56:20.660086  250122 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:56:20.663574  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:56:20.714965  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:56:20.714996  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:56:20.717428  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:56:20.723099  250122 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:56:20.723117  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:56:20.736712  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:56:20.767312  250122 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:56:20.767340  250122 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:56:20.805977  250122 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:56:20.806001  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:56:20.828818  250122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:56:20.828850  250122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:56:20.862028  250122 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:56:20.862055  250122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:56:20.895808  250122 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:56:20.895838  250122 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:56:20.919859  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:56:20.933653  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:56:20.943513  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:56:20.943545  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:56:21.034659  250122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:21.034695  250122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:56:21.053246  250122 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:56:21.053278  250122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:56:21.100831  250122 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:56:21.100866  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:56:21.140190  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:56:21.140216  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:56:21.208542  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:56:21.208575  250122 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:56:21.267990  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:56:21.277641  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:56:21.446944  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:56:21.446974  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:56:21.499412  250122 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:56:21.499438  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:56:21.734514  250122 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:56:21.734543  250122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:56:21.871272  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:56:21.981167  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:56:21.981195  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:56:22.323399  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:56:22.323436  250122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:56:22.699813  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:56:22.699841  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:56:22.990543  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:56:22.990569  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:56:23.325840  250122 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:56:23.325875  250122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:56:23.561748  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:56:26.994659  250122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:56:26.994707  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:26.997698  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:26.998273  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:26.998305  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:26.998501  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:26.998696  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:26.998833  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:26.998959  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:27.465118  250122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:56:27.560541  250122 addons.go:238] Setting addon gcp-auth=true in "addons-735249"
	I0407 12:56:27.560612  250122 host.go:66] Checking if "addons-735249" exists ...
	I0407 12:56:27.560962  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:27.561023  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:27.577279  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0407 12:56:27.577954  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:27.578497  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:27.578525  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:27.578861  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:27.579333  250122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 12:56:27.579371  250122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:56:27.594911  250122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0407 12:56:27.595356  250122 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:56:27.595981  250122 main.go:141] libmachine: Using API Version  1
	I0407 12:56:27.596008  250122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:56:27.596489  250122 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:56:27.596673  250122 main.go:141] libmachine: (addons-735249) Calling .GetState
	I0407 12:56:27.598327  250122 main.go:141] libmachine: (addons-735249) Calling .DriverName
	I0407 12:56:27.598595  250122 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:56:27.598622  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHHostname
	I0407 12:56:27.602006  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:27.602430  250122 main.go:141] libmachine: (addons-735249) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:43:7d", ip: ""} in network mk-addons-735249: {Iface:virbr1 ExpiryTime:2025-04-07 13:55:46 +0000 UTC Type:0 Mac:52:54:00:e6:43:7d Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-735249 Clientid:01:52:54:00:e6:43:7d}
	I0407 12:56:27.602461  250122 main.go:141] libmachine: (addons-735249) DBG | domain addons-735249 has defined IP address 192.168.39.136 and MAC address 52:54:00:e6:43:7d in network mk-addons-735249
	I0407 12:56:27.602661  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHPort
	I0407 12:56:27.602892  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHKeyPath
	I0407 12:56:27.603080  250122 main.go:141] libmachine: (addons-735249) Calling .GetSSHUsername
	I0407 12:56:27.603222  250122 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/addons-735249/id_rsa Username:docker}
	I0407 12:56:29.196636  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.675544733s)
	I0407 12:56:29.196695  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.196708  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.196706  250122 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.59680775s)
	I0407 12:56:29.196640  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.690528558s)
	I0407 12:56:29.196789  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.196803  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.196807  250122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.596822284s)
	I0407 12:56:29.196823  250122 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
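	The sed pipeline completed above rewrites the CoreDNS Corefile in place: it inserts a log directive before errors and a hosts stanza before the forward plugin so that host.minikube.internal resolves to the host gateway 192.168.39.1. Reconstructed from the sed expressions in that command (a sketch, not copied from the live ConfigMap), the edited fragment would look roughly like:
	
	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf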
	I0407 12:56:29.196909  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.595029914s)
	I0407 12:56:29.197011  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197024  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197034  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.542265206s)
	I0407 12:56:29.197074  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197089  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197131  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.540384837s)
	I0407 12:56:29.197160  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197170  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197206  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.533609681s)
	I0407 12:56:29.197239  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197249  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197266  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.479816186s)
	I0407 12:56:29.197283  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197291  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197332  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.460596339s)
	I0407 12:56:29.197351  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197359  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197359  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.277462855s)
	I0407 12:56:29.197375  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197395  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197463  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.263766125s)
	I0407 12:56:29.197480  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197490  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197586  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.929569286s)
	I0407 12:56:29.197603  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197619  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197681  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.91999181s)
	I0407 12:56:29.197717  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.197731  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.197774  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.326453499s)
	W0407 12:56:29.197829  250122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:56:29.197854  250122 retry.go:31] will retry after 330.889708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
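	The failure above is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, before the API server has established them, so the REST mapping lookup fails and minikube schedules a retry. A minimal manual workaround sketch, assuming the same kubeconfig and addon manifest paths used in the commands above:
	
	# Wait for the snapshot CRDs from the first apply to become established,
	# then re-apply only the VolumeSnapshotClass that failed to map.
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml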
	I0407 12:56:29.198155  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198182  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198190  250122 node_ready.go:35] waiting up to 6m0s for node "addons-735249" to be "Ready" ...
	I0407 12:56:29.198212  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198221  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198228  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198235  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198294  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198294  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198315  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198322  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198326  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198329  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198334  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198336  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198342  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198349  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198382  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198401  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198408  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198415  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198415  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198428  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198435  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198435  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198442  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198459  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198466  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198473  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198479  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198520  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198538  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198544  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198550  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198555  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198595  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198598  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.198614  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198621  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198628  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.198634  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.198635  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.198645  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.198420  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.200784  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.200820  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.200827  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201026  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201048  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201054  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201070  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.201076  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.201127  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201144  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201150  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201157  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.201162  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.201203  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201224  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201230  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201297  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201311  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201328  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201334  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201341  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.201347  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.201390  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201396  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201484  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201510  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201516  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.201524  250122 addons.go:479] Verifying addon ingress=true in "addons-735249"
	I0407 12:56:29.201811  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.201841  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.201848  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.202086  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.202112  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.202126  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.202136  250122 addons.go:479] Verifying addon metrics-server=true in "addons-735249"
	I0407 12:56:29.202350  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.202384  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.202391  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.202992  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.203007  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.203015  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.203023  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.203442  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.203475  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:29.203508  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.203564  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.204689  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.204703  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.204711  250122 addons.go:479] Verifying addon registry=true in "addons-735249"
	I0407 12:56:29.205345  250122 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-735249 service yakd-dashboard -n yakd-dashboard
	
	I0407 12:56:29.205592  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.205607  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.205678  250122 out.go:177] * Verifying ingress addon...
	I0407 12:56:29.205714  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.206016  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.206950  250122 out.go:177] * Verifying registry addon...
	I0407 12:56:29.207577  250122 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0407 12:56:29.208845  250122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0407 12:56:29.230689  250122 node_ready.go:49] node "addons-735249" has status "Ready":"True"
	I0407 12:56:29.230717  250122 node_ready.go:38] duration metric: took 32.50975ms for node "addons-735249" to be "Ready" ...
	I0407 12:56:29.230729  250122 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:56:29.231133  250122 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0407 12:56:29.231155  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:29.231171  250122 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:56:29.231185  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:29.242926  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.242952  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.243218  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.243243  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	W0407 12:56:29.243337  250122 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
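	The warning above is an optimistic-concurrency conflict: the addon tried to mark local-path as the default StorageClass while another writer updated the same object, so the apiserver rejected the stale update and the addon did not retry. If a default class is needed for a test, it can be set again by hand; a minimal sketch, assuming minikube's usual "standard" class is the current default and using this profile's context:
	
	# Mark local-path as the default StorageClass.
	kubectl --context addons-735249 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	# Clear the flag on the previous default so only one class stays marked default.
	kubectl --context addons-735249 patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'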
	I0407 12:56:29.251974  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:29.251995  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:29.252330  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:29.252344  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:29.292889  250122 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-l47tr" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:29.529492  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:56:29.702457  250122 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-735249" context rescaled to 1 replicas
	I0407 12:56:29.712279  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:29.712455  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:30.217278  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:30.217555  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:30.739923  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:30.740021  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:30.770788  250122 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.17216765s)
	I0407 12:56:30.772140  250122 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:56:30.773500  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.211694155s)
	I0407 12:56:30.773558  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:30.773579  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:30.773867  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:30.773885  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:30.773897  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:30.773907  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:30.774164  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:30.774184  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:30.774198  250122 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-735249"
	I0407 12:56:30.774794  250122 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:56:30.775627  250122 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:56:30.776354  250122 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:56:30.776374  250122 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:56:30.777676  250122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:56:30.808136  250122 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:56:30.808161  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:30.904630  250122 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:56:30.904664  250122 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:56:31.004575  250122 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:56:31.004602  250122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:56:31.119878  250122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:56:31.212472  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:31.213414  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:31.281423  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:31.298208  250122 pod_ready.go:103] pod "amd-gpu-device-plugin-l47tr" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:31.414401  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.884842129s)
	I0407 12:56:31.414457  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:31.414476  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:31.414797  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:31.414836  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:31.414849  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:31.414858  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:31.415081  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:31.415096  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:31.713251  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:31.713394  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:31.799111  250122 pod_ready.go:93] pod "amd-gpu-device-plugin-l47tr" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:31.799137  250122 pod_ready.go:82] duration metric: took 2.506220515s for pod "amd-gpu-device-plugin-l47tr" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.799148  250122 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fdfkd" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.802859  250122 pod_ready.go:93] pod "coredns-668d6bf9bc-fdfkd" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:31.802879  250122 pod_ready.go:82] duration metric: took 3.723812ms for pod "coredns-668d6bf9bc-fdfkd" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.802887  250122 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zs59h" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.807072  250122 pod_ready.go:93] pod "coredns-668d6bf9bc-zs59h" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:31.807100  250122 pod_ready.go:82] duration metric: took 4.204853ms for pod "coredns-668d6bf9bc-zs59h" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.807113  250122 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.813707  250122 pod_ready.go:93] pod "etcd-addons-735249" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:31.813733  250122 pod_ready.go:82] duration metric: took 6.605691ms for pod "etcd-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.813741  250122 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.814684  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:31.820588  250122 pod_ready.go:93] pod "kube-apiserver-addons-735249" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:31.820607  250122 pod_ready.go:82] duration metric: took 6.858783ms for pod "kube-apiserver-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:31.820616  250122 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.201668  250122 pod_ready.go:93] pod "kube-controller-manager-addons-735249" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:32.201693  250122 pod_ready.go:82] duration metric: took 381.070068ms for pod "kube-controller-manager-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.201705  250122 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q9nxs" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.231996  250122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.112070339s)
	I0407 12:56:32.232056  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:32.232074  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:32.232401  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:32.232487  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:32.232501  250122 main.go:141] libmachine: Making call to close driver server
	I0407 12:56:32.232510  250122 main.go:141] libmachine: (addons-735249) Calling .Close
	I0407 12:56:32.232459  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:32.232784  250122 main.go:141] libmachine: (addons-735249) DBG | Closing plugin on server side
	I0407 12:56:32.232822  250122 main.go:141] libmachine: Successfully made call to close driver server
	I0407 12:56:32.232837  250122 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 12:56:32.233224  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:32.233325  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:32.233909  250122 addons.go:479] Verifying addon gcp-auth=true in "addons-735249"
	I0407 12:56:32.235505  250122 out.go:177] * Verifying gcp-auth addon...
	I0407 12:56:32.237632  250122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:56:32.254879  250122 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:56:32.254899  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:32.328929  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:32.598827  250122 pod_ready.go:93] pod "kube-proxy-q9nxs" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:32.598856  250122 pod_ready.go:82] duration metric: took 397.143799ms for pod "kube-proxy-q9nxs" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.598870  250122 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.716705  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:32.716837  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:32.740398  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:32.817351  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:32.996615  250122 pod_ready.go:93] pod "kube-scheduler-addons-735249" in "kube-system" namespace has status "Ready":"True"
	I0407 12:56:32.996642  250122 pod_ready.go:82] duration metric: took 397.764628ms for pod "kube-scheduler-addons-735249" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:32.996655  250122 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace to be "Ready" ...
	I0407 12:56:33.212184  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:33.212681  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:33.241200  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:33.281087  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:33.712178  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:33.712908  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:33.740950  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:33.781558  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:34.212788  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:34.212874  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:34.240408  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:34.281366  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:34.710492  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:34.712231  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:34.740606  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:34.781830  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:35.002483  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:35.212402  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:35.212565  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:35.241236  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:35.301120  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:35.711788  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:35.711918  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:35.741694  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:36.051857  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:36.213881  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:36.213983  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:36.241094  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:36.281114  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:36.711986  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:36.712783  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:36.812649  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:36.812766  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:37.005149  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:37.212934  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:37.213094  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:37.241045  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:37.280980  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:37.711973  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:37.712546  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:37.741070  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:37.780944  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:38.212971  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:38.213005  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:38.240647  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:38.282135  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:38.712288  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:38.713266  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:38.740812  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:38.781105  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:39.212878  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:39.213049  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:39.241183  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:39.281628  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:39.502242  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:39.711314  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:39.712974  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:39.742755  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:39.781392  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:40.212298  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:40.212585  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:40.241734  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:40.281336  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:40.711026  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:40.711854  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:40.742141  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:40.781002  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:41.212942  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:41.213090  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:41.241096  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:41.281189  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:41.503073  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:41.712251  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:41.712987  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:41.740382  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:41.781802  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:42.212519  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:42.212520  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:42.241282  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:42.282230  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:42.712590  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:42.713114  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:42.740360  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:42.782254  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:43.212591  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:43.213029  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:43.241313  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:43.281587  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:43.830949  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:43.831004  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:43.831174  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:43.831339  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:43.832570  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:44.213166  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:44.213287  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:44.242025  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:44.282282  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:44.712591  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:44.712796  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:44.740410  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:44.781196  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:45.212163  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:45.212256  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:45.240645  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:45.280888  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:45.711272  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:45.712275  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:45.741540  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:45.781851  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:46.002006  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:46.479840  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:46.480962  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:46.480976  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:46.481163  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:46.712881  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:46.712970  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:46.741608  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:46.782915  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:47.212078  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:47.213038  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:47.240889  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:47.280677  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:47.711187  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:47.711601  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:47.741985  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:47.780878  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:48.002565  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:48.211099  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:48.212769  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:48.240215  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:48.281271  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:49.028290  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:49.028586  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:49.028842  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:49.029013  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:49.214665  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:49.214814  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:49.240093  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:49.281339  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:49.711803  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:49.711847  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:49.740482  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:49.781650  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:50.004988  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:50.212229  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:50.212437  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:50.241287  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:50.281761  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:50.711890  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:50.711899  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:50.740606  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:50.782374  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:51.212433  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:51.212498  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:51.241201  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:51.281392  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:51.711660  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:51.712031  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:51.740375  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:51.781736  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:52.212888  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:52.212901  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:52.240033  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:52.281448  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:52.502907  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:52.711329  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:52.712327  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:52.741072  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:52.781462  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:53.212383  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:53.212458  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:53.241366  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:53.282353  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:53.712370  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:53.712552  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:53.741146  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:53.781808  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:54.211184  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:54.213320  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:54.240937  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:54.281849  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:54.711256  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:54.713195  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:54.741111  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:54.781347  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:55.002112  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:55.212120  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:55.212143  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:55.240914  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:55.281723  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:55.711793  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:55.711904  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:55.740274  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:55.781117  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:56.211909  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:56.212238  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:56.240908  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:56.280720  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:56.712218  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:56.712230  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:56.741758  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:56.813524  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:57.002829  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:57.212472  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:57.212589  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:57.312910  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:57.313220  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:57.712564  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:57.712642  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:57.741375  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:57.781396  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:58.210970  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:58.212660  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:58.241288  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:58.281667  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:58.710783  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:58.712350  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:58.740743  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:58.780370  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:59.003310  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:56:59.212884  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:59.212930  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:59.240813  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:59.281254  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:56:59.710818  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:56:59.712543  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:56:59.740967  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:56:59.780810  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:00.212589  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:00.212731  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:00.240887  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:00.281268  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:00.710983  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:00.712770  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:00.740266  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:00.781220  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:01.220505  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:01.220828  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:01.240233  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:01.281529  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:01.502711  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:01.712673  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:01.712785  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:01.740301  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:01.781525  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:02.211818  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:02.212130  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:02.240751  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:02.280866  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:02.712154  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:02.712713  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:02.740523  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:02.782087  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:03.213096  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:03.213119  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:03.313765  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:03.314087  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:03.711345  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:03.712581  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:03.741445  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:03.781992  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:04.001887  250122 pod_ready.go:103] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"False"
	I0407 12:57:04.212112  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:04.212508  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:04.240803  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:04.280607  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:04.711317  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:04.713350  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:04.746017  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:04.812619  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:05.213659  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:05.214671  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:05.313929  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:05.314010  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:05.714074  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:05.714826  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:05.739971  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:05.780923  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:06.002023  250122 pod_ready.go:93] pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:06.002045  250122 pod_ready.go:82] duration metric: took 33.005382797s for pod "metrics-server-7fbb699795-w467q" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:06.002055  250122 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7zt67" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:06.005312  250122 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7zt67" in "kube-system" namespace has status "Ready":"True"
	I0407 12:57:06.005331  250122 pod_ready.go:82] duration metric: took 3.269791ms for pod "nvidia-device-plugin-daemonset-7zt67" in "kube-system" namespace to be "Ready" ...
	I0407 12:57:06.005347  250122 pod_ready.go:39] duration metric: took 36.774604445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:57:06.005364  250122 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:57:06.005417  250122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:57:06.029151  250122 api_server.go:72] duration metric: took 46.022614541s to wait for apiserver process to appear ...
	I0407 12:57:06.029176  250122 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:57:06.029195  250122 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I0407 12:57:06.037448  250122 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I0407 12:57:06.038709  250122 api_server.go:141] control plane version: v1.32.2
	I0407 12:57:06.038736  250122 api_server.go:131] duration metric: took 9.553706ms to wait for apiserver health ...
	I0407 12:57:06.038745  250122 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:57:06.044325  250122 system_pods.go:59] 18 kube-system pods found
	I0407 12:57:06.044353  250122 system_pods.go:61] "amd-gpu-device-plugin-l47tr" [afeda0dc-c285-4bb9-bfe8-767aa6d8917b] Running
	I0407 12:57:06.044359  250122 system_pods.go:61] "coredns-668d6bf9bc-fdfkd" [695d7785-7f23-444e-beec-ff1403e60790] Running
	I0407 12:57:06.044365  250122 system_pods.go:61] "csi-hostpath-attacher-0" [fe7412af-609d-44a4-b87c-1e43f862ba4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:57:06.044371  250122 system_pods.go:61] "csi-hostpath-resizer-0" [3ac39222-fb30-49a1-8c89-54c90089d148] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:57:06.044378  250122 system_pods.go:61] "csi-hostpathplugin-jlkqn" [ed28bf91-8cc2-4a8b-bcbb-17b535a4b548] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:57:06.044383  250122 system_pods.go:61] "etcd-addons-735249" [8c3f0f86-7aef-43ba-913d-fdfd2955a53d] Running
	I0407 12:57:06.044387  250122 system_pods.go:61] "kube-apiserver-addons-735249" [73833bf4-ccc1-45a9-8c47-4b82df4f4e60] Running
	I0407 12:57:06.044390  250122 system_pods.go:61] "kube-controller-manager-addons-735249" [d7737628-a05e-41a7-ab44-170d2f7a8479] Running
	I0407 12:57:06.044394  250122 system_pods.go:61] "kube-ingress-dns-minikube" [dfd76782-43d1-4f7b-a621-dbb37fb1e32b] Running
	I0407 12:57:06.044397  250122 system_pods.go:61] "kube-proxy-q9nxs" [87028979-81e9-4a2e-aa1c-4a42ec92a2dd] Running
	I0407 12:57:06.044400  250122 system_pods.go:61] "kube-scheduler-addons-735249" [bfc8f482-0969-452d-8c62-168e10062794] Running
	I0407 12:57:06.044404  250122 system_pods.go:61] "metrics-server-7fbb699795-w467q" [cfd66a0f-4642-418e-8488-660d68bd0187] Running
	I0407 12:57:06.044407  250122 system_pods.go:61] "nvidia-device-plugin-daemonset-7zt67" [dba13539-6120-49a5-8bee-1dccb5579bec] Running
	I0407 12:57:06.044411  250122 system_pods.go:61] "registry-6c88467877-vz6qp" [3cb295f5-143a-4936-a222-d574355c2a0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:57:06.044418  250122 system_pods.go:61] "registry-proxy-bcl68" [16a879e9-1f5d-4e62-846d-b9dbbbf00755] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:57:06.044434  250122 system_pods.go:61] "snapshot-controller-68b874b76f-5bcvw" [65ce741a-0065-43c1-945c-5c751d4ce0c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:57:06.044441  250122 system_pods.go:61] "snapshot-controller-68b874b76f-d6mbx" [df121e3e-c205-4146-976d-035d5d1bad75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:57:06.044446  250122 system_pods.go:61] "storage-provisioner" [7766dae1-8f40-4e3f-821e-d387b3a741e4] Running
	I0407 12:57:06.044453  250122 system_pods.go:74] duration metric: took 5.702438ms to wait for pod list to return data ...
	I0407 12:57:06.044463  250122 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:57:06.052409  250122 default_sa.go:45] found service account: "default"
	I0407 12:57:06.052452  250122 default_sa.go:55] duration metric: took 7.98131ms for default service account to be created ...
	I0407 12:57:06.052464  250122 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:57:06.059292  250122 system_pods.go:86] 18 kube-system pods found
	I0407 12:57:06.059320  250122 system_pods.go:89] "amd-gpu-device-plugin-l47tr" [afeda0dc-c285-4bb9-bfe8-767aa6d8917b] Running
	I0407 12:57:06.059325  250122 system_pods.go:89] "coredns-668d6bf9bc-fdfkd" [695d7785-7f23-444e-beec-ff1403e60790] Running
	I0407 12:57:06.059332  250122 system_pods.go:89] "csi-hostpath-attacher-0" [fe7412af-609d-44a4-b87c-1e43f862ba4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:57:06.059338  250122 system_pods.go:89] "csi-hostpath-resizer-0" [3ac39222-fb30-49a1-8c89-54c90089d148] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:57:06.059345  250122 system_pods.go:89] "csi-hostpathplugin-jlkqn" [ed28bf91-8cc2-4a8b-bcbb-17b535a4b548] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:57:06.059350  250122 system_pods.go:89] "etcd-addons-735249" [8c3f0f86-7aef-43ba-913d-fdfd2955a53d] Running
	I0407 12:57:06.059354  250122 system_pods.go:89] "kube-apiserver-addons-735249" [73833bf4-ccc1-45a9-8c47-4b82df4f4e60] Running
	I0407 12:57:06.059358  250122 system_pods.go:89] "kube-controller-manager-addons-735249" [d7737628-a05e-41a7-ab44-170d2f7a8479] Running
	I0407 12:57:06.059363  250122 system_pods.go:89] "kube-ingress-dns-minikube" [dfd76782-43d1-4f7b-a621-dbb37fb1e32b] Running
	I0407 12:57:06.059366  250122 system_pods.go:89] "kube-proxy-q9nxs" [87028979-81e9-4a2e-aa1c-4a42ec92a2dd] Running
	I0407 12:57:06.059369  250122 system_pods.go:89] "kube-scheduler-addons-735249" [bfc8f482-0969-452d-8c62-168e10062794] Running
	I0407 12:57:06.059372  250122 system_pods.go:89] "metrics-server-7fbb699795-w467q" [cfd66a0f-4642-418e-8488-660d68bd0187] Running
	I0407 12:57:06.059376  250122 system_pods.go:89] "nvidia-device-plugin-daemonset-7zt67" [dba13539-6120-49a5-8bee-1dccb5579bec] Running
	I0407 12:57:06.059380  250122 system_pods.go:89] "registry-6c88467877-vz6qp" [3cb295f5-143a-4936-a222-d574355c2a0d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0407 12:57:06.059385  250122 system_pods.go:89] "registry-proxy-bcl68" [16a879e9-1f5d-4e62-846d-b9dbbbf00755] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0407 12:57:06.059393  250122 system_pods.go:89] "snapshot-controller-68b874b76f-5bcvw" [65ce741a-0065-43c1-945c-5c751d4ce0c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:57:06.059399  250122 system_pods.go:89] "snapshot-controller-68b874b76f-d6mbx" [df121e3e-c205-4146-976d-035d5d1bad75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:57:06.059405  250122 system_pods.go:89] "storage-provisioner" [7766dae1-8f40-4e3f-821e-d387b3a741e4] Running
	I0407 12:57:06.059412  250122 system_pods.go:126] duration metric: took 6.942432ms to wait for k8s-apps to be running ...
	I0407 12:57:06.059422  250122 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:57:06.059468  250122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:57:06.076619  250122 system_svc.go:56] duration metric: took 17.188194ms WaitForService to wait for kubelet
	I0407 12:57:06.076643  250122 kubeadm.go:582] duration metric: took 46.070113134s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:57:06.076662  250122 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:57:06.079979  250122 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 12:57:06.080002  250122 node_conditions.go:123] node cpu capacity is 2
	I0407 12:57:06.080032  250122 node_conditions.go:105] duration metric: took 3.36554ms to run NodePressure ...
	I0407 12:57:06.080043  250122 start.go:241] waiting for startup goroutines ...
	I0407 12:57:06.212028  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:06.213333  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:06.241317  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:06.282059  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:06.713367  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:06.713464  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:06.741688  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:06.781576  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:07.212009  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:07.212018  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:07.245029  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:07.312117  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:07.713693  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:07.713994  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:07.741122  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:07.781896  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:08.212417  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:08.212446  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:08.240730  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:08.280907  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:08.712234  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:08.712329  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:08.741811  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:08.781032  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:09.211787  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:09.212291  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:09.241451  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:09.312595  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:09.710739  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:09.712470  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:09.741556  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:09.783108  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:10.215135  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:10.215214  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:10.316201  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:10.316254  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:10.711199  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:10.712387  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:10.740744  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:10.781215  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:11.212595  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:11.212749  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:11.241274  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:11.313555  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:11.712069  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:11.712309  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:11.740636  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:11.781424  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:12.210592  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:12.212227  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:12.240787  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:12.280832  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:12.712741  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:12.712870  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:12.740281  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:12.781425  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:13.210799  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:13.212940  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:13.240407  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:13.281677  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:13.711545  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:13.712648  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:13.741811  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:13.780946  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:14.212340  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:14.212364  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:14.240710  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:14.281682  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:14.711047  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:14.712450  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:57:14.740781  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:14.782891  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:15.211863  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:15.212044  250122 kapi.go:107] duration metric: took 46.003195635s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:57:15.240691  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:15.282191  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:15.712613  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:15.741616  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:15.781901  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:16.211644  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:16.241374  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:16.283425  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:16.710727  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:16.741561  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:16.781958  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:17.211334  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:17.241406  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:17.281962  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:17.817216  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:17.817845  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:17.818326  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:18.210565  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:18.241356  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:18.281903  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:18.711201  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:18.740828  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:18.781679  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:19.211100  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:19.240721  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:19.282001  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:19.712125  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:19.741400  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:19.782405  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:20.227197  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:20.242687  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:20.582263  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:20.711778  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:20.812914  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:20.813177  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:21.215778  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:21.317083  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:21.317095  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:21.715498  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:21.741064  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:21.780901  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:22.211591  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:22.241169  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:22.281936  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:22.711251  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:22.743097  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:22.781448  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:23.212597  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:23.241150  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:23.281445  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:23.711116  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:23.743371  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:23.781588  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:24.211327  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:24.242133  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:24.282508  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:24.711135  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:24.740723  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:24.781668  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:25.211241  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:25.240774  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:25.311681  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:25.711539  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:25.741128  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:25.781570  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:26.211606  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:26.240955  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:26.281073  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:26.711733  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:26.741478  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:26.781365  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:27.212322  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:27.241035  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:27.281022  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:27.711977  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:27.759119  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:27.780914  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:28.213043  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:28.312928  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:28.313947  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:28.711385  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:28.741029  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:28.781504  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:29.211835  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:29.240456  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:29.281425  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:29.714375  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:29.741330  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:29.781117  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:30.210963  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:30.240319  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:30.281266  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:30.710952  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:30.740368  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:30.781322  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:31.212256  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:31.240564  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:31.281926  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:31.715460  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:31.750620  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:31.816070  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:32.211560  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:32.312035  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:32.312101  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:32.711585  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:32.741189  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:32.781162  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:33.212432  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:33.241491  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:33.282275  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:33.714326  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:33.813739  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:33.813971  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:34.229949  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:34.240176  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:34.281188  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:34.711980  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:34.740930  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:34.781700  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:35.211621  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:35.241335  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:35.281778  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:35.711016  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:35.812133  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:35.812407  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:36.211372  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:36.312828  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:36.312875  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:36.711975  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:36.743247  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:36.781979  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:37.212488  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:37.241565  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:37.294052  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:57:37.711214  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:37.812406  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:37.812618  250122 kapi.go:107] duration metric: took 1m7.034940549s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:57:38.212237  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:38.241307  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:38.711019  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:38.740731  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:39.211506  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:39.241194  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:39.913559  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:39.915235  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:40.211798  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:40.241587  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:40.711023  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:40.740702  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:41.211161  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:41.240936  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:41.711526  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:41.741305  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:42.400357  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:42.401102  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:42.711771  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:42.741034  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:43.214864  250122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:57:43.240149  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:43.712216  250122 kapi.go:107] duration metric: took 1m14.504633936s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0407 12:57:43.740957  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:44.267088  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:44.740970  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:45.246992  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:45.745239  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:46.240486  250122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0407 12:57:46.741213  250122 kapi.go:107] duration metric: took 1m14.503577968s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0407 12:57:46.742814  250122 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-735249 cluster.
	I0407 12:57:46.743938  250122 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:57:46.745078  250122 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:57:46.746237  250122 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, metrics-server, nvidia-device-plugin, inspektor-gadget, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0407 12:57:46.747270  250122 addons.go:514] duration metric: took 1m26.740712968s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns metrics-server nvidia-device-plugin inspektor-gadget amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0407 12:57:46.747307  250122 start.go:246] waiting for cluster config update ...
	I0407 12:57:46.747325  250122 start.go:255] writing updated cluster config ...
	I0407 12:57:46.747588  250122 ssh_runner.go:195] Run: rm -f paused
	I0407 12:57:46.799940  250122 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:57:46.801495  250122 out.go:177] * Done! kubectl is now configured to use "addons-735249" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.642089476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05514232-97ed-4831-a62d-7a47b99999a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.642495403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7f25ee07ef712945151bb5c68437a06b1459b816f70f7c5d0fb57843953364a,PodSandboxId:cb7b9a59cd994c86ab271a0fd9fa33fa3a2adb3132f8e5b1e0664b5a6d407efe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744030713566574900,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dff3b731-4ff4-4616-93c3-ecb41f13454c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10dc45c3aa62210f5fce942c1770bfe85c94134789849bb884c54c7ea43260ba,PodSandboxId:e6836b0570d7f9a0dabf1e7050140baa05207054cf61cbb83de62b517bba4f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744030671504809763,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8ef8c44-629b-404a-b551-3d7bcccc3d86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cd13e4f92209b66d31cb88320b145ad6c569acdf7bf644e0a990168bc7aec4,PodSandboxId:d18b52a5bf48b96cb21a2a029bc3100a08f7aad5d54e9f59778d8325ee545373,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744030662546806455,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-dfr6p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 966287f8-2ded-460f-b1c2-fdc79815e037,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5292e09baa92b6e88508d3efbb0a4832ac870bb00d8704cf07e0460e4c78a278,PodSandboxId:a7a0c7b2ccd5ab096ca32435939228448cedc771fe4382f662033be51184bdad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643385508520,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-frvwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b876f8-0a66-42fa-8160-30849f75c70b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93694afed0f11d9238b68d44b6de33170d154eec4b1ddd7a0b0f0073a6bf08b,PodSandboxId:46ac6e7274dcfff4c977eb389da9c7e07ee35e01bd5a4b633477557e93fdc3d8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643275737496,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7jnsv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21e9cb1-4ec6-425d-9970-935294cb35aa,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3449333d3d1c020e1467b01156b3ff8598ec91cc9ca43b30288eb2d94c4ca737,PodSandboxId:ea91176bdb8d84ab58619e70a1c1aadffc8b32db203af9826644c386c3d2754c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744030630117865927,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tkfxp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d6bb7527-cfbc-41eb-9f40-862de60a9106,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368f2a48512430c65dabd7cea54fb3940f52f2d15b29b959922d4397a99f19ea,PodSandboxId:f615b7a6f69ce6ac98d68dab26eb859f8eeea73109c46f2b97a5e65d1adb2ddb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5
b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744030598886605065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd76782-43d1-4f7b-a621-dbb37fb1e32b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe5de6cc20d75beac1084085509bf23c4a30ed4053f24cf3c2d2e947b0684dd,PodSandboxId:8acceffba04473742069564cd3500c2afef358fb0f9aead6e5dc6a06f577593d,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744030590782748884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-l47tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afeda0dc-c285-4bb9-bfe8-767aa6d8917b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0208ac4c28a8f98f739940033c91926d06c2492a2b02a01431bdf23c11d3c5ab,PodSandboxId:de549cc58b36dafef44265954708a2f
ef248412a4b464d98ddb96da949977185,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744030586711051170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7766dae1-8f40-4e3f-821e-d387b3a741e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4f5122790db40976f411b62d30a3306fa3c8ec2780a5678b49e4fecdefe14,PodSandboxId:35d45530c065530ae5a840c6955ddcb29678d83580b
64c99f91a5360191f3bb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744030586274894435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fdfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695d7785-7f23-444e-beec-ff1403e60790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6345fef4d981b6c4ebf43f0a11fefdb21a933c43b6fe5a80ff4e29ba725336,PodSandboxId:d58339ffde8e347218b05586d97b3d498a2fa75fb222388de389fc4ed7e06f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744030581064442845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q9nxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87028979-81e9-4a2e-aa1c-4a42ec92a2dd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7bddb09b897171b09d9e894e522216500be8b0f45ca7842b433a50e1a5aefed4,PodSandboxId:ed1a1c08c3116113f40c9a579a4bc5de0cf7595cbe5c9134799004bb7968453a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744030569992138492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 898109d5768a25601905afc193f78b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:aa530338ac611679582693b0776a88d12a32f8038ae1cd45be9f75d6fdcb84b7,PodSandboxId:baf7540b58df4ae4e7a96d3d01475ccb3109e71ea010426f0529c9d43380204c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744030569990663703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724d24b8ffe0793242de3e1fd8b7a519,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1ab2c27bf583f8008518612f2ffabd5b8d5f8e03328ab525547ecb634eef5605,PodSandboxId:f7e0aeea2069ba94c595e28b4e96d2f8d653bcdcef4fed4424ba2e8f0617fb5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744030569956340724,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18548150cc3cf52961216ad24ad6fa3f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d7d787b4b8497b30afc09be2156d463a630ed8d
ef0b79fb98e886b00fee661,PodSandboxId:de1c607842a926a6ae6fb5c482f957197315ebc5a2332c9da21067cb0cf48d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744030569925327904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1e83637230add1f8c0e178f52b476f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=05514232-97ed-4831-a62d-7a47b99999a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.683712592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9b66eff-475c-4fa0-8fb4-7828ad931261 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.683812592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9b66eff-475c-4fa0-8fb4-7828ad931261 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.685040928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ad22417-2d56-4c78-99eb-076ebdda9b29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.686227045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030852686196762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ad22417-2d56-4c78-99eb-076ebdda9b29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.686917393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9da661d-cd9e-4cb6-b550-44fa6825fdb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.686975522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9da661d-cd9e-4cb6-b550-44fa6825fdb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.687322415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7f25ee07ef712945151bb5c68437a06b1459b816f70f7c5d0fb57843953364a,PodSandboxId:cb7b9a59cd994c86ab271a0fd9fa33fa3a2adb3132f8e5b1e0664b5a6d407efe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744030713566574900,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dff3b731-4ff4-4616-93c3-ecb41f13454c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10dc45c3aa62210f5fce942c1770bfe85c94134789849bb884c54c7ea43260ba,PodSandboxId:e6836b0570d7f9a0dabf1e7050140baa05207054cf61cbb83de62b517bba4f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744030671504809763,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8ef8c44-629b-404a-b551-3d7bcccc3d86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cd13e4f92209b66d31cb88320b145ad6c569acdf7bf644e0a990168bc7aec4,PodSandboxId:d18b52a5bf48b96cb21a2a029bc3100a08f7aad5d54e9f59778d8325ee545373,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744030662546806455,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-dfr6p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 966287f8-2ded-460f-b1c2-fdc79815e037,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5292e09baa92b6e88508d3efbb0a4832ac870bb00d8704cf07e0460e4c78a278,PodSandboxId:a7a0c7b2ccd5ab096ca32435939228448cedc771fe4382f662033be51184bdad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643385508520,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-frvwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b876f8-0a66-42fa-8160-30849f75c70b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93694afed0f11d9238b68d44b6de33170d154eec4b1ddd7a0b0f0073a6bf08b,PodSandboxId:46ac6e7274dcfff4c977eb389da9c7e07ee35e01bd5a4b633477557e93fdc3d8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643275737496,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7jnsv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21e9cb1-4ec6-425d-9970-935294cb35aa,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3449333d3d1c020e1467b01156b3ff8598ec91cc9ca43b30288eb2d94c4ca737,PodSandboxId:ea91176bdb8d84ab58619e70a1c1aadffc8b32db203af9826644c386c3d2754c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744030630117865927,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tkfxp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d6bb7527-cfbc-41eb-9f40-862de60a9106,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368f2a48512430c65dabd7cea54fb3940f52f2d15b29b959922d4397a99f19ea,PodSandboxId:f615b7a6f69ce6ac98d68dab26eb859f8eeea73109c46f2b97a5e65d1adb2ddb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5
b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744030598886605065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd76782-43d1-4f7b-a621-dbb37fb1e32b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe5de6cc20d75beac1084085509bf23c4a30ed4053f24cf3c2d2e947b0684dd,PodSandboxId:8acceffba04473742069564cd3500c2afef358fb0f9aead6e5dc6a06f577593d,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744030590782748884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-l47tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afeda0dc-c285-4bb9-bfe8-767aa6d8917b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0208ac4c28a8f98f739940033c91926d06c2492a2b02a01431bdf23c11d3c5ab,PodSandboxId:de549cc58b36dafef44265954708a2f
ef248412a4b464d98ddb96da949977185,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744030586711051170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7766dae1-8f40-4e3f-821e-d387b3a741e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4f5122790db40976f411b62d30a3306fa3c8ec2780a5678b49e4fecdefe14,PodSandboxId:35d45530c065530ae5a840c6955ddcb29678d83580b
64c99f91a5360191f3bb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744030586274894435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fdfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695d7785-7f23-444e-beec-ff1403e60790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6345fef4d981b6c4ebf43f0a11fefdb21a933c43b6fe5a80ff4e29ba725336,PodSandboxId:d58339ffde8e347218b05586d97b3d498a2fa75fb222388de389fc4ed7e06f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744030581064442845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q9nxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87028979-81e9-4a2e-aa1c-4a42ec92a2dd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7bddb09b897171b09d9e894e522216500be8b0f45ca7842b433a50e1a5aefed4,PodSandboxId:ed1a1c08c3116113f40c9a579a4bc5de0cf7595cbe5c9134799004bb7968453a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744030569992138492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 898109d5768a25601905afc193f78b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:aa530338ac611679582693b0776a88d12a32f8038ae1cd45be9f75d6fdcb84b7,PodSandboxId:baf7540b58df4ae4e7a96d3d01475ccb3109e71ea010426f0529c9d43380204c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744030569990663703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724d24b8ffe0793242de3e1fd8b7a519,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1ab2c27bf583f8008518612f2ffabd5b8d5f8e03328ab525547ecb634eef5605,PodSandboxId:f7e0aeea2069ba94c595e28b4e96d2f8d653bcdcef4fed4424ba2e8f0617fb5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744030569956340724,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18548150cc3cf52961216ad24ad6fa3f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d7d787b4b8497b30afc09be2156d463a630ed8d
ef0b79fb98e886b00fee661,PodSandboxId:de1c607842a926a6ae6fb5c482f957197315ebc5a2332c9da21067cb0cf48d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744030569925327904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1e83637230add1f8c0e178f52b476f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=c9da661d-cd9e-4cb6-b550-44fa6825fdb7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.727997738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ae77fc7-3aa8-4480-adc2-6491d56f5b56 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.728085855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ae77fc7-3aa8-4480-adc2-6491d56f5b56 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.729012421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1daf006d-b445-4782-999d-c5a7ae082598 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.730354679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030852730327238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1daf006d-b445-4782-999d-c5a7ae082598 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.730845076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f45cc892-ae23-4297-bd57-1c8cd3db7477 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.730916132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f45cc892-ae23-4297-bd57-1c8cd3db7477 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.731304544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7f25ee07ef712945151bb5c68437a06b1459b816f70f7c5d0fb57843953364a,PodSandboxId:cb7b9a59cd994c86ab271a0fd9fa33fa3a2adb3132f8e5b1e0664b5a6d407efe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744030713566574900,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dff3b731-4ff4-4616-93c3-ecb41f13454c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10dc45c3aa62210f5fce942c1770bfe85c94134789849bb884c54c7ea43260ba,PodSandboxId:e6836b0570d7f9a0dabf1e7050140baa05207054cf61cbb83de62b517bba4f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744030671504809763,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8ef8c44-629b-404a-b551-3d7bcccc3d86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cd13e4f92209b66d31cb88320b145ad6c569acdf7bf644e0a990168bc7aec4,PodSandboxId:d18b52a5bf48b96cb21a2a029bc3100a08f7aad5d54e9f59778d8325ee545373,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744030662546806455,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-dfr6p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 966287f8-2ded-460f-b1c2-fdc79815e037,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5292e09baa92b6e88508d3efbb0a4832ac870bb00d8704cf07e0460e4c78a278,PodSandboxId:a7a0c7b2ccd5ab096ca32435939228448cedc771fe4382f662033be51184bdad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643385508520,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-frvwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b876f8-0a66-42fa-8160-30849f75c70b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93694afed0f11d9238b68d44b6de33170d154eec4b1ddd7a0b0f0073a6bf08b,PodSandboxId:46ac6e7274dcfff4c977eb389da9c7e07ee35e01bd5a4b633477557e93fdc3d8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643275737496,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7jnsv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21e9cb1-4ec6-425d-9970-935294cb35aa,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3449333d3d1c020e1467b01156b3ff8598ec91cc9ca43b30288eb2d94c4ca737,PodSandboxId:ea91176bdb8d84ab58619e70a1c1aadffc8b32db203af9826644c386c3d2754c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744030630117865927,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tkfxp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d6bb7527-cfbc-41eb-9f40-862de60a9106,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368f2a48512430c65dabd7cea54fb3940f52f2d15b29b959922d4397a99f19ea,PodSandboxId:f615b7a6f69ce6ac98d68dab26eb859f8eeea73109c46f2b97a5e65d1adb2ddb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5
b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744030598886605065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd76782-43d1-4f7b-a621-dbb37fb1e32b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe5de6cc20d75beac1084085509bf23c4a30ed4053f24cf3c2d2e947b0684dd,PodSandboxId:8acceffba04473742069564cd3500c2afef358fb0f9aead6e5dc6a06f577593d,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744030590782748884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-l47tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afeda0dc-c285-4bb9-bfe8-767aa6d8917b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0208ac4c28a8f98f739940033c91926d06c2492a2b02a01431bdf23c11d3c5ab,PodSandboxId:de549cc58b36dafef44265954708a2f
ef248412a4b464d98ddb96da949977185,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744030586711051170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7766dae1-8f40-4e3f-821e-d387b3a741e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4f5122790db40976f411b62d30a3306fa3c8ec2780a5678b49e4fecdefe14,PodSandboxId:35d45530c065530ae5a840c6955ddcb29678d83580b
64c99f91a5360191f3bb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744030586274894435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fdfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695d7785-7f23-444e-beec-ff1403e60790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6345fef4d981b6c4ebf43f0a11fefdb21a933c43b6fe5a80ff4e29ba725336,PodSandboxId:d58339ffde8e347218b05586d97b3d498a2fa75fb222388de389fc4ed7e06f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744030581064442845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q9nxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87028979-81e9-4a2e-aa1c-4a42ec92a2dd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7bddb09b897171b09d9e894e522216500be8b0f45ca7842b433a50e1a5aefed4,PodSandboxId:ed1a1c08c3116113f40c9a579a4bc5de0cf7595cbe5c9134799004bb7968453a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744030569992138492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 898109d5768a25601905afc193f78b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:aa530338ac611679582693b0776a88d12a32f8038ae1cd45be9f75d6fdcb84b7,PodSandboxId:baf7540b58df4ae4e7a96d3d01475ccb3109e71ea010426f0529c9d43380204c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744030569990663703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724d24b8ffe0793242de3e1fd8b7a519,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1ab2c27bf583f8008518612f2ffabd5b8d5f8e03328ab525547ecb634eef5605,PodSandboxId:f7e0aeea2069ba94c595e28b4e96d2f8d653bcdcef4fed4424ba2e8f0617fb5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744030569956340724,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18548150cc3cf52961216ad24ad6fa3f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d7d787b4b8497b30afc09be2156d463a630ed8d
ef0b79fb98e886b00fee661,PodSandboxId:de1c607842a926a6ae6fb5c482f957197315ebc5a2332c9da21067cb0cf48d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744030569925327904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1e83637230add1f8c0e178f52b476f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=f45cc892-ae23-4297-bd57-1c8cd3db7477 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.742801485Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.743009577Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.765174581Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=749bb662-d2fb-4671-8f08-743111c58f8c name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.765362777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=749bb662-d2fb-4671-8f08-743111c58f8c name=/runtime.v1.RuntimeService/Version
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.766534420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f999dad-faa7-41b6-883b-9d3cc06086e2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.767711741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030852767682913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f999dad-faa7-41b6-883b-9d3cc06086e2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.768521304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df297e04-7ef9-4a68-bd91-dbbfa244384b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.768586953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df297e04-7ef9-4a68-bd91-dbbfa244384b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:00:52 addons-735249 crio[658]: time="2025-04-07 13:00:52.768904371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7f25ee07ef712945151bb5c68437a06b1459b816f70f7c5d0fb57843953364a,PodSandboxId:cb7b9a59cd994c86ab271a0fd9fa33fa3a2adb3132f8e5b1e0664b5a6d407efe,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744030713566574900,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dff3b731-4ff4-4616-93c3-ecb41f13454c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10dc45c3aa62210f5fce942c1770bfe85c94134789849bb884c54c7ea43260ba,PodSandboxId:e6836b0570d7f9a0dabf1e7050140baa05207054cf61cbb83de62b517bba4f86,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744030671504809763,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8ef8c44-629b-404a-b551-3d7bcccc3d86,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08cd13e4f92209b66d31cb88320b145ad6c569acdf7bf644e0a990168bc7aec4,PodSandboxId:d18b52a5bf48b96cb21a2a029bc3100a08f7aad5d54e9f59778d8325ee545373,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744030662546806455,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-dfr6p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 966287f8-2ded-460f-b1c2-fdc79815e037,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5292e09baa92b6e88508d3efbb0a4832ac870bb00d8704cf07e0460e4c78a278,PodSandboxId:a7a0c7b2ccd5ab096ca32435939228448cedc771fe4382f662033be51184bdad,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643385508520,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-frvwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b876f8-0a66-42fa-8160-30849f75c70b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e93694afed0f11d9238b68d44b6de33170d154eec4b1ddd7a0b0f0073a6bf08b,PodSandboxId:46ac6e7274dcfff4c977eb389da9c7e07ee35e01bd5a4b633477557e93fdc3d8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744030643275737496,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7jnsv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e21e9cb1-4ec6-425d-9970-935294cb35aa,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3449333d3d1c020e1467b01156b3ff8598ec91cc9ca43b30288eb2d94c4ca737,PodSandboxId:ea91176bdb8d84ab58619e70a1c1aadffc8b32db203af9826644c386c3d2754c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744030630117865927,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-tkfxp,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d6bb7527-cfbc-41eb-9f40-862de60a9106,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:368f2a48512430c65dabd7cea54fb3940f52f2d15b29b959922d4397a99f19ea,PodSandboxId:f615b7a6f69ce6ac98d68dab26eb859f8eeea73109c46f2b97a5e65d1adb2ddb,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5
b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744030598886605065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfd76782-43d1-4f7b-a621-dbb37fb1e32b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe5de6cc20d75beac1084085509bf23c4a30ed4053f24cf3c2d2e947b0684dd,PodSandboxId:8acceffba04473742069564cd3500c2afef358fb0f9aead6e5dc6a06f577593d,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744030590782748884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-l47tr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afeda0dc-c285-4bb9-bfe8-767aa6d8917b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0208ac4c28a8f98f739940033c91926d06c2492a2b02a01431bdf23c11d3c5ab,PodSandboxId:de549cc58b36dafef44265954708a2f
ef248412a4b464d98ddb96da949977185,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744030586711051170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7766dae1-8f40-4e3f-821e-d387b3a741e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4f5122790db40976f411b62d30a3306fa3c8ec2780a5678b49e4fecdefe14,PodSandboxId:35d45530c065530ae5a840c6955ddcb29678d83580b
64c99f91a5360191f3bb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744030586274894435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fdfkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695d7785-7f23-444e-beec-ff1403e60790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de6345fef4d981b6c4ebf43f0a11fefdb21a933c43b6fe5a80ff4e29ba725336,PodSandboxId:d58339ffde8e347218b05586d97b3d498a2fa75fb222388de389fc4ed7e06f7f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744030581064442845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q9nxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87028979-81e9-4a2e-aa1c-4a42ec92a2dd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:7bddb09b897171b09d9e894e522216500be8b0f45ca7842b433a50e1a5aefed4,PodSandboxId:ed1a1c08c3116113f40c9a579a4bc5de0cf7595cbe5c9134799004bb7968453a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744030569992138492,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 898109d5768a25601905afc193f78b62,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:aa530338ac611679582693b0776a88d12a32f8038ae1cd45be9f75d6fdcb84b7,PodSandboxId:baf7540b58df4ae4e7a96d3d01475ccb3109e71ea010426f0529c9d43380204c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744030569990663703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724d24b8ffe0793242de3e1fd8b7a519,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1ab2c27bf583f8008518612f2ffabd5b8d5f8e03328ab525547ecb634eef5605,PodSandboxId:f7e0aeea2069ba94c595e28b4e96d2f8d653bcdcef4fed4424ba2e8f0617fb5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744030569956340724,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18548150cc3cf52961216ad24ad6fa3f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d7d787b4b8497b30afc09be2156d463a630ed8d
ef0b79fb98e886b00fee661,PodSandboxId:de1c607842a926a6ae6fb5c482f957197315ebc5a2332c9da21067cb0cf48d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744030569925327904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-735249,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1e83637230add1f8c0e178f52b476f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=df297e04-7ef9-4a68-bd91-dbbfa244384b name=/runtime.v1.RuntimeService/ListContainers
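	
	The Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's periodic CRI polls against CRI-O; each poll returns the same container list, so the three dumps differ only in their request ids and timestamps. As an illustrative cross-check (assuming crictl is present on the node image and talks to CRI-O on its default endpoint), the same CRI calls can be issued by hand:
	
	out/minikube-linux-amd64 -p addons-735249 ssh "sudo crictl version"
	out/minikube-linux-amd64 -p addons-735249 ssh "sudo crictl imagefsinfo"
	out/minikube-linux-amd64 -p addons-735249 ssh "sudo crictl ps -a"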
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7f25ee07ef71       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   cb7b9a59cd994       nginx
	10dc45c3aa622       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e6836b0570d7f       busybox
	08cd13e4f9220       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   d18b52a5bf48b       ingress-nginx-controller-56d7c84fd4-dfr6p
	5292e09baa92b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   a7a0c7b2ccd5a       ingress-nginx-admission-patch-frvwj
	e93694afed0f1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   46ac6e7274dcf       ingress-nginx-admission-create-7jnsv
	3449333d3d1c0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   ea91176bdb8d8       local-path-provisioner-76f89f99b5-tkfxp
	368f2a4851243       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   f615b7a6f69ce       kube-ingress-dns-minikube
	1fe5de6cc20d7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   8acceffba0447       amd-gpu-device-plugin-l47tr
	0208ac4c28a8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   de549cc58b36d       storage-provisioner
	65a4f5122790d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   35d45530c0655       coredns-668d6bf9bc-fdfkd
	de6345fef4d98       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   d58339ffde8e3       kube-proxy-q9nxs
	7bddb09b89717       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago       Running             kube-apiserver            0                   ed1a1c08c3116       kube-apiserver-addons-735249
	aa530338ac611       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago       Running             kube-scheduler            0                   baf7540b58df4       kube-scheduler-addons-735249
	1ab2c27bf583f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   f7e0aeea2069b       etcd-addons-735249
	26d7d787b4b84       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago       Running             kube-controller-manager   0                   de1c607842a92       kube-controller-manager-addons-735249
	
	
	==> coredns [65a4f5122790db40976f411b62d30a3306fa3c8ec2780a5678b49e4fecdefe14] <==
	[INFO] 10.244.0.7:34423 - 32493 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000100142s
	[INFO] 10.244.0.7:34423 - 55622 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084855s
	[INFO] 10.244.0.7:34423 - 44043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000103022s
	[INFO] 10.244.0.7:34423 - 53907 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066089s
	[INFO] 10.244.0.7:34423 - 19699 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000138579s
	[INFO] 10.244.0.7:34423 - 39411 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000214662s
	[INFO] 10.244.0.7:34423 - 35366 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000116793s
	[INFO] 10.244.0.7:56947 - 54386 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119868s
	[INFO] 10.244.0.7:56947 - 54101 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099405s
	[INFO] 10.244.0.7:44299 - 36263 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117118s
	[INFO] 10.244.0.7:44299 - 36026 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075798s
	[INFO] 10.244.0.7:37226 - 21674 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126471s
	[INFO] 10.244.0.7:37226 - 21467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00022879s
	[INFO] 10.244.0.7:47180 - 5437 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128638s
	[INFO] 10.244.0.7:47180 - 5252 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112383s
	[INFO] 10.244.0.23:57652 - 33401 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000404343s
	[INFO] 10.244.0.23:50918 - 20540 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000400607s
	[INFO] 10.244.0.23:44184 - 40010 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115425s
	[INFO] 10.244.0.23:49654 - 40774 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130914s
	[INFO] 10.244.0.23:35358 - 37466 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153813s
	[INFO] 10.244.0.23:45892 - 37069 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153207s
	[INFO] 10.244.0.23:34018 - 40487 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003964998s
	[INFO] 10.244.0.23:58272 - 39058 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004107941s
	[INFO] 10.244.0.26:36113 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000314959s
	[INFO] 10.244.0.26:36023 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134802s
	
	
	==> describe nodes <==
	Name:               addons-735249
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-735249
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=addons-735249
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_56_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-735249
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:56:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-735249
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:58:59 +0000   Mon, 07 Apr 2025 12:56:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:58:59 +0000   Mon, 07 Apr 2025 12:56:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:58:59 +0000   Mon, 07 Apr 2025 12:56:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:58:59 +0000   Mon, 07 Apr 2025 12:56:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    addons-735249
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd242151d1ce4de79d247d489e1ff549
	  System UUID:                fd242151-d1ce-4de7-9d24-7d489e1ff549
	  Boot ID:                    7e925d46-9443-4a69-a4fa-6972bb9cf01d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-world-app-7d9564db4-4gm5b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-dfr6p    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m25s
	  kube-system                 amd-gpu-device-plugin-l47tr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-668d6bf9bc-fdfkd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-735249                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-735249                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-addons-735249        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-q9nxs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-addons-735249                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  local-path-storage          local-path-provisioner-76f89f99b5-tkfxp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m30s  kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node addons-735249 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node addons-735249 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node addons-735249 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m37s  kubelet          Node addons-735249 status is now: NodeReady
	  Normal  RegisteredNode           4m35s  node-controller  Node addons-735249 event: Registered Node addons-735249 in Controller
	
	
	==> dmesg <==
	[  +0.093914] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.274635] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.139529] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.154732] kauditd_printk_skb: 99 callbacks suppressed
	[  +5.052145] kauditd_printk_skb: 145 callbacks suppressed
	[  +5.868197] kauditd_printk_skb: 85 callbacks suppressed
	[ +15.087352] kauditd_printk_skb: 5 callbacks suppressed
	[Apr 7 12:57] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.490848] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.020209] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.779232] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.486425] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.262885] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.963780] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.119835] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.862996] kauditd_printk_skb: 7 callbacks suppressed
	[Apr 7 12:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.054628] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.146466] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.380140] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.348298] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.654733] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.374877] kauditd_printk_skb: 11 callbacks suppressed
	[Apr 7 12:59] kauditd_printk_skb: 7 callbacks suppressed
	[Apr 7 13:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [1ab2c27bf583f8008518612f2ffabd5b8d5f8e03328ab525547ecb634eef5605] <==
	{"level":"info","ts":"2025-04-07T12:58:37.345165Z","caller":"traceutil/trace.go:171","msg":"trace[1158751769] transaction","detail":"{read_only:false; response_revision:1506; number_of_response:1; }","duration":"148.157085ms","start":"2025-04-07T12:58:37.196977Z","end":"2025-04-07T12:58:37.345134Z","steps":["trace[1158751769] 'process raft request'  (duration: 148.056728ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:58:37.572866Z","caller":"traceutil/trace.go:171","msg":"trace[872477576] linearizableReadLoop","detail":"{readStateIndex:1562; appliedIndex:1561; }","duration":"306.207233ms","start":"2025-04-07T12:58:37.266640Z","end":"2025-04-07T12:58:37.572847Z","steps":["trace[872477576] 'read index received'  (duration: 79.097512ms)","trace[872477576] 'applied index is now lower than readState.Index'  (duration: 227.108704ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:58:37.573055Z","caller":"traceutil/trace.go:171","msg":"trace[918119209] transaction","detail":"{read_only:false; response_revision:1507; number_of_response:1; }","duration":"335.537969ms","start":"2025-04-07T12:58:37.237508Z","end":"2025-04-07T12:58:37.573046Z","steps":["trace[918119209] 'process raft request'  (duration: 335.254315ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:37.573324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:58:37.237489Z","time spent":"335.58718ms","remote":"127.0.0.1:50404","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-wpddtcocbohsepwt5rezd5ekwq\" mod_revision:1407 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-wpddtcocbohsepwt5rezd5ekwq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-wpddtcocbohsepwt5rezd5ekwq\" > >"}
	{"level":"warn","ts":"2025-04-07T12:58:37.573512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.868344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-04-07T12:58:37.573537Z","caller":"traceutil/trace.go:171","msg":"trace[21966114] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1507; }","duration":"306.913296ms","start":"2025-04-07T12:58:37.266616Z","end":"2025-04-07T12:58:37.573529Z","steps":["trace[21966114] 'agreement among raft nodes before linearized reading'  (duration: 306.831489ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:37.573553Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:58:37.266603Z","time spent":"306.946645ms","remote":"127.0.0.1:50404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":520,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 "}
	{"level":"warn","ts":"2025-04-07T12:58:37.573652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"281.143881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:58:37.573665Z","caller":"traceutil/trace.go:171","msg":"trace[886588479] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1507; }","duration":"281.179056ms","start":"2025-04-07T12:58:37.292482Z","end":"2025-04-07T12:58:37.573661Z","steps":["trace[886588479] 'agreement among raft nodes before linearized reading'  (duration: 281.151828ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:37.573909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.109488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:58:37.573936Z","caller":"traceutil/trace.go:171","msg":"trace[538446585] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1507; }","duration":"172.14875ms","start":"2025-04-07T12:58:37.401773Z","end":"2025-04-07T12:58:37.573921Z","steps":["trace[538446585] 'agreement among raft nodes before linearized reading'  (duration: 172.118037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:37.574015Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.563507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/headlamp/headlamp\" limit:1 ","response":"range_response_count:1 size:1540"}
	{"level":"info","ts":"2025-04-07T12:58:37.574027Z","caller":"traceutil/trace.go:171","msg":"trace[1787342181] range","detail":"{range_begin:/registry/services/specs/headlamp/headlamp; range_end:; response_count:1; response_revision:1507; }","duration":"224.595396ms","start":"2025-04-07T12:58:37.349428Z","end":"2025-04-07T12:58:37.574023Z","steps":["trace[1787342181] 'agreement among raft nodes before linearized reading'  (duration: 224.547339ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:58:39.635109Z","caller":"traceutil/trace.go:171","msg":"trace[1979116390] transaction","detail":"{read_only:false; response_revision:1519; number_of_response:1; }","duration":"252.615803ms","start":"2025-04-07T12:58:39.382477Z","end":"2025-04-07T12:58:39.635093Z","steps":["trace[1979116390] 'process raft request'  (duration: 252.513038ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:58:39.635325Z","caller":"traceutil/trace.go:171","msg":"trace[2027681655] linearizableReadLoop","detail":"{readStateIndex:1575; appliedIndex:1575; }","duration":"233.397664ms","start":"2025-04-07T12:58:39.401911Z","end":"2025-04-07T12:58:39.635309Z","steps":["trace[2027681655] 'read index received'  (duration: 233.390993ms)","trace[2027681655] 'applied index is now lower than readState.Index'  (duration: 5.452µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:58:39.635441Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.512137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:58:39.635470Z","caller":"traceutil/trace.go:171","msg":"trace[2089372453] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1519; }","duration":"233.576588ms","start":"2025-04-07T12:58:39.401886Z","end":"2025-04-07T12:58:39.635463Z","steps":["trace[2089372453] 'agreement among raft nodes before linearized reading'  (duration: 233.507276ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:58:39.846823Z","caller":"traceutil/trace.go:171","msg":"trace[1848577439] linearizableReadLoop","detail":"{readStateIndex:1576; appliedIndex:1575; }","duration":"211.428666ms","start":"2025-04-07T12:58:39.635375Z","end":"2025-04-07T12:58:39.846804Z","steps":["trace[1848577439] 'read index received'  (duration: 201.414236ms)","trace[1848577439] 'applied index is now lower than readState.Index'  (duration: 10.012351ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T12:58:39.846999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.59597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-07T12:58:39.847039Z","caller":"traceutil/trace.go:171","msg":"trace[2073796853] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1519; }","duration":"354.681616ms","start":"2025-04-07T12:58:39.492351Z","end":"2025-04-07T12:58:39.847033Z","steps":["trace[2073796853] 'agreement among raft nodes before linearized reading'  (duration: 354.595284ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:39.847462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.989286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/nginx\" limit:1 ","response":"range_response_count:1 size:620"}
	{"level":"info","ts":"2025-04-07T12:58:39.847511Z","caller":"traceutil/trace.go:171","msg":"trace[1072634291] range","detail":"{range_begin:/registry/services/specs/default/nginx; range_end:; response_count:1; response_revision:1519; }","duration":"207.062615ms","start":"2025-04-07T12:58:39.640440Z","end":"2025-04-07T12:58:39.847503Z","steps":["trace[1072634291] 'agreement among raft nodes before linearized reading'  (duration: 206.941277ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:39.847732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.948424ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T12:58:39.847751Z","caller":"traceutil/trace.go:171","msg":"trace[1024460349] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1519; }","duration":"182.96889ms","start":"2025-04-07T12:58:39.664775Z","end":"2025-04-07T12:58:39.847744Z","steps":["trace[1024460349] 'agreement among raft nodes before linearized reading'  (duration: 182.942536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T12:58:39.847084Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T12:58:39.492332Z","time spent":"354.745596ms","remote":"127.0.0.1:57032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":118,"response count":1,"response size":29,"request content":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true "}
	
	
	==> kernel <==
	 13:00:53 up 5 min,  0 users,  load average: 0.72, 1.24, 0.63
	Linux addons-735249 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7bddb09b897171b09d9e894e522216500be8b0f45ca7842b433a50e1a5aefed4] <==
	E0407 12:57:05.979321       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.22:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.12.22:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.12.22:443: connect: connection refused" logger="UnhandledError"
	I0407 12:57:06.065779       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0407 12:57:58.557541       1 conn.go:339] Error on socket receive: read tcp 192.168.39.136:8443->192.168.39.1:52756: use of closed network connection
	E0407 12:57:58.745454       1 conn.go:339] Error on socket receive: read tcp 192.168.39.136:8443->192.168.39.1:52790: use of closed network connection
	I0407 12:58:20.028375       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.239.3"}
	I0407 12:58:29.178758       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0407 12:58:29.393955       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.159.109"}
	I0407 12:58:30.377583       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0407 12:58:31.415435       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0407 12:58:47.357157       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0407 12:59:06.994304       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0407 12:59:08.525285       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:59:08.525341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:59:08.561807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:59:08.561874       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:59:08.587751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:59:08.587948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:59:08.627734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:59:08.627787       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:59:08.675902       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:59:08.675947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0407 12:59:09.628569       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0407 12:59:09.676567       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0407 12:59:09.731043       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0407 13:00:51.631638       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.156.75"}
	
	
	==> kube-controller-manager [26d7d787b4b8497b30afc09be2156d463a630ed8def0b79fb98e886b00fee661] <==
	E0407 12:59:44.695179       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:59:57.656986       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:59:57.657950       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0407 12:59:57.658903       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:59:57.658947       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:14.576337       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:14.577365       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0407 13:00:14.578222       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:14.578392       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:14.731036       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:14.732372       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 13:00:14.733201       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:14.733365       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 13:00:21.780597       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:21.781660       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0407 13:00:21.782551       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:21.782628       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0407 13:00:51.433995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="33.466534ms"
	I0407 13:00:51.453784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="19.718935ms"
	I0407 13:00:51.454054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="94.004µs"
	I0407 13:00:51.454195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.634µs"
	W0407 13:00:53.198169       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 13:00:53.199446       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0407 13:00:53.200677       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 13:00:53.200805       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [de6345fef4d981b6c4ebf43f0a11fefdb21a933c43b6fe5a80ff4e29ba725336] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:56:21.941335       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 12:56:21.951610       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	E0407 12:56:21.951670       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:56:22.046154       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:56:22.046286       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:56:22.046320       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:56:22.058107       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:56:22.058440       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:56:22.058452       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:56:22.061056       1 config.go:199] "Starting service config controller"
	I0407 12:56:22.061067       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:56:22.061088       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:56:22.061091       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:56:22.061479       1 config.go:329] "Starting node config controller"
	I0407 12:56:22.061485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:56:22.163525       1 shared_informer.go:320] Caches are synced for node config
	I0407 12:56:22.163538       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:56:22.163554       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [aa530338ac611679582693b0776a88d12a32f8038ae1cd45be9f75d6fdcb84b7] <==
	W0407 12:56:12.419988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:56:12.420179       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:12.420015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 12:56:12.420399       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:12.420049       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 12:56:12.420479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.234883       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 12:56:13.234916       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.294057       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:56:13.294133       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.310815       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:56:13.310930       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.324893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:56:13.325622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.363705       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:56:13.363822       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.476924       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:56:13.477041       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.490769       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:56:13.490854       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:56:13.596662       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:56:13.596711       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:56:13.613424       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:56:13.613762       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0407 12:56:16.811731       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:00:15 addons-735249 kubelet[1233]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:00:15 addons-735249 kubelet[1233]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:00:15 addons-735249 kubelet[1233]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:00:15 addons-735249 kubelet[1233]: E0407 13:00:15.297478    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030815297002061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:15 addons-735249 kubelet[1233]: E0407 13:00:15.297737    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030815297002061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:25 addons-735249 kubelet[1233]: E0407 13:00:25.300785    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030825300350248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:25 addons-735249 kubelet[1233]: E0407 13:00:25.300824    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030825300350248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:30 addons-735249 kubelet[1233]: I0407 13:00:30.089768    1233 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-l47tr" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 13:00:35 addons-735249 kubelet[1233]: E0407 13:00:35.303526    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030835303123297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:35 addons-735249 kubelet[1233]: E0407 13:00:35.304898    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030835303123297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:39 addons-735249 kubelet[1233]: I0407 13:00:39.089853    1233 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 13:00:45 addons-735249 kubelet[1233]: E0407 13:00:45.308937    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030845308591924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:45 addons-735249 kubelet[1233]: E0407 13:00:45.309332    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744030845308591924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595373,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428094    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="3ac39222-fb30-49a1-8c89-54c90089d148" containerName="csi-resizer"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428125    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="node-driver-registrar"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428132    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="liveness-probe"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428137    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe7412af-609d-44a4-b87c-1e43f862ba4d" containerName="csi-attacher"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428142    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="csi-snapshotter"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428148    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="df121e3e-c205-4146-976d-035d5d1bad75" containerName="volume-snapshot-controller"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428152    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="csi-external-health-monitor-controller"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428157    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="hostpath"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428163    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="65ce741a-0065-43c1-945c-5c751d4ce0c6" containerName="volume-snapshot-controller"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428168    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed28bf91-8cc2-4a8b-bcbb-17b535a4b548" containerName="csi-provisioner"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.428179    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="45718bf9-906a-48d1-b30d-97942120a1db" containerName="task-pv-container"
	Apr 07 13:00:51 addons-735249 kubelet[1233]: I0407 13:00:51.511163    1233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x29wd\" (UniqueName: \"kubernetes.io/projected/cd74347f-93bb-4f50-95ac-afd6e52f0a01-kube-api-access-x29wd\") pod \"hello-world-app-7d9564db4-4gm5b\" (UID: \"cd74347f-93bb-4f50-95ac-afd6e52f0a01\") " pod="default/hello-world-app-7d9564db4-4gm5b"
	
	
	==> storage-provisioner [0208ac4c28a8f98f739940033c91926d06c2492a2b02a01431bdf23c11d3c5ab] <==
	I0407 12:56:27.101708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:56:27.446464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:56:27.446535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:56:27.694918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:56:27.695143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-735249_10b5ded3-9807-44e9-b465-94db040f4ec7!
	I0407 12:56:27.696200       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c00b6aae-4f94-4ad5-a524-cc21e36f4698", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-735249_10b5ded3-9807-44e9-b465-94db040f4ec7 became leader
	I0407 12:56:27.796516       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-735249_10b5ded3-9807-44e9-b465-94db040f4ec7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-735249 -n addons-735249
helpers_test.go:261: (dbg) Run:  kubectl --context addons-735249 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-4gm5b ingress-nginx-admission-create-7jnsv ingress-nginx-admission-patch-frvwj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-735249 describe pod hello-world-app-7d9564db4-4gm5b ingress-nginx-admission-create-7jnsv ingress-nginx-admission-patch-frvwj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-735249 describe pod hello-world-app-7d9564db4-4gm5b ingress-nginx-admission-create-7jnsv ingress-nginx-admission-patch-frvwj: exit status 1 (75.4434ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-4gm5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-735249/192.168.39.136
	Start Time:       Mon, 07 Apr 2025 13:00:51 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x29wd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x29wd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-4gm5b to addons-735249
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7jnsv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-frvwj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-735249 describe pod hello-world-app-7d9564db4-4gm5b ingress-nginx-admission-create-7jnsv ingress-nginx-admission-patch-frvwj: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable ingress-dns --alsologtostderr -v=1: (1.312895628s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable ingress --alsologtostderr -v=1: (7.71559921s)
--- FAIL: TestAddons/parallel/Ingress (154.15s)

                                                
                                    
TestPreload (168.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-673837 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0407 13:51:00.588799  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-673837 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.936338408s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-673837 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-673837 image pull gcr.io/k8s-minikube/busybox: (4.485124346s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-673837
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-673837: (7.315957555s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-673837 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0407 13:52:30.514892  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:52:47.434391  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-673837 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.342818131s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-673837 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-07 13:52:47.796707096 +0000 UTC m=+3451.188161264
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-673837 -n test-preload-673837
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-673837 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-673837 logs -n 25: (1.098955241s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-054683 ssh -n                                                                 | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | multinode-054683-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-054683 ssh -n multinode-054683 sudo cat                                       | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | /home/docker/cp-test_multinode-054683-m03_multinode-054683.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-054683 cp multinode-054683-m03:/home/docker/cp-test.txt                       | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | multinode-054683-m02:/home/docker/cp-test_multinode-054683-m03_multinode-054683-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-054683 ssh -n                                                                 | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | multinode-054683-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-054683 ssh -n multinode-054683-m02 sudo cat                                   | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | /home/docker/cp-test_multinode-054683-m03_multinode-054683-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-054683 node stop m03                                                          | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	| node    | multinode-054683 node start                                                             | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:37 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-054683                                                                | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC |                     |
	| stop    | -p multinode-054683                                                                     | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:37 UTC | 07 Apr 25 13:40 UTC |
	| start   | -p multinode-054683                                                                     | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:40 UTC | 07 Apr 25 13:43 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-054683                                                                | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC |                     |
	| node    | multinode-054683 node delete                                                            | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:43 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-054683 stop                                                                   | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:43 UTC | 07 Apr 25 13:46 UTC |
	| start   | -p multinode-054683                                                                     | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:49 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-054683                                                                | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:49 UTC |                     |
	| start   | -p multinode-054683-m02                                                                 | multinode-054683-m02 | jenkins | v1.35.0 | 07 Apr 25 13:49 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-054683-m03                                                                 | multinode-054683-m03 | jenkins | v1.35.0 | 07 Apr 25 13:49 UTC | 07 Apr 25 13:49 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-054683                                                                 | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:49 UTC |                     |
	| delete  | -p multinode-054683-m03                                                                 | multinode-054683-m03 | jenkins | v1.35.0 | 07 Apr 25 13:49 UTC | 07 Apr 25 13:50 UTC |
	| delete  | -p multinode-054683                                                                     | multinode-054683     | jenkins | v1.35.0 | 07 Apr 25 13:50 UTC | 07 Apr 25 13:50 UTC |
	| start   | -p test-preload-673837                                                                  | test-preload-673837  | jenkins | v1.35.0 | 07 Apr 25 13:50 UTC | 07 Apr 25 13:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-673837 image pull                                                          | test-preload-673837  | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-673837                                                                  | test-preload-673837  | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
	| start   | -p test-preload-673837                                                                  | test-preload-673837  | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:52 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-673837 image list                                                          | test-preload-673837  | jenkins | v1.35.0 | 07 Apr 25 13:52 UTC | 07 Apr 25 13:52 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:51:48
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:51:48.269168  280880 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:51:48.269423  280880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:48.269433  280880 out.go:358] Setting ErrFile to fd 2...
	I0407 13:51:48.269437  280880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:51:48.269597  280880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:51:48.270108  280880 out.go:352] Setting JSON to false
	I0407 13:51:48.270950  280880 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20055,"bootTime":1744013853,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:51:48.271067  280880 start.go:139] virtualization: kvm guest
	I0407 13:51:48.273380  280880 out.go:177] * [test-preload-673837] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:51:48.274963  280880 notify.go:220] Checking for updates...
	I0407 13:51:48.274968  280880 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:51:48.276745  280880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:51:48.278029  280880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:51:48.279311  280880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:51:48.280482  280880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:51:48.281798  280880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:51:48.283419  280880 config.go:182] Loaded profile config "test-preload-673837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:51:48.283833  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:48.283914  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:48.299939  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0407 13:51:48.300457  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:48.301064  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:51:48.301088  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:48.301523  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:48.301741  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:51:48.303868  280880 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 13:51:48.305183  280880 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:51:48.305625  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:48.305676  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:48.320936  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0407 13:51:48.321392  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:48.321792  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:51:48.321810  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:48.322277  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:48.322481  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:51:48.358712  280880 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:51:48.359883  280880 start.go:297] selected driver: kvm2
	I0407 13:51:48.359899  280880 start.go:901] validating driver "kvm2" against &{Name:test-preload-673837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-673837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:51:48.359994  280880 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:51:48.360719  280880 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:51:48.360795  280880 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:51:48.376398  280880 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:51:48.376864  280880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:51:48.376917  280880 cni.go:84] Creating CNI manager for ""
	I0407 13:51:48.376966  280880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:51:48.377016  280880 start.go:340] cluster config:
	{Name:test-preload-673837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-673837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:51:48.377145  280880 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:51:48.378990  280880 out.go:177] * Starting "test-preload-673837" primary control-plane node in "test-preload-673837" cluster
	I0407 13:51:48.380317  280880 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:51:48.405558  280880 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0407 13:51:48.405589  280880 cache.go:56] Caching tarball of preloaded images
	I0407 13:51:48.405768  280880 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:51:48.407566  280880 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0407 13:51:48.408860  280880 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:51:48.438253  280880 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0407 13:51:53.474744  280880 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:51:53.474845  280880 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0407 13:51:54.328840  280880 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0407 13:51:54.328968  280880 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/config.json ...
	I0407 13:51:54.329207  280880 start.go:360] acquireMachinesLock for test-preload-673837: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:51:54.329274  280880 start.go:364] duration metric: took 45.485µs to acquireMachinesLock for "test-preload-673837"
	I0407 13:51:54.329292  280880 start.go:96] Skipping create...Using existing machine configuration
	I0407 13:51:54.329298  280880 fix.go:54] fixHost starting: 
	I0407 13:51:54.329563  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:51:54.329600  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:51:54.344749  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0407 13:51:54.345254  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:51:54.345723  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:51:54.345747  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:51:54.346137  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:51:54.346471  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:51:54.346638  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetState
	I0407 13:51:54.348330  280880 fix.go:112] recreateIfNeeded on test-preload-673837: state=Stopped err=<nil>
	I0407 13:51:54.348362  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	W0407 13:51:54.348532  280880 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 13:51:54.351311  280880 out.go:177] * Restarting existing kvm2 VM for "test-preload-673837" ...
	I0407 13:51:54.352990  280880 main.go:141] libmachine: (test-preload-673837) Calling .Start
	I0407 13:51:54.353173  280880 main.go:141] libmachine: (test-preload-673837) starting domain...
	I0407 13:51:54.353196  280880 main.go:141] libmachine: (test-preload-673837) ensuring networks are active...
	I0407 13:51:54.354102  280880 main.go:141] libmachine: (test-preload-673837) Ensuring network default is active
	I0407 13:51:54.354566  280880 main.go:141] libmachine: (test-preload-673837) Ensuring network mk-test-preload-673837 is active
	I0407 13:51:54.355068  280880 main.go:141] libmachine: (test-preload-673837) getting domain XML...
	I0407 13:51:54.355907  280880 main.go:141] libmachine: (test-preload-673837) creating domain...
	I0407 13:51:55.570254  280880 main.go:141] libmachine: (test-preload-673837) waiting for IP...
	I0407 13:51:55.571091  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:55.571564  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:55.571647  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:55.571555  280949 retry.go:31] will retry after 304.896505ms: waiting for domain to come up
	I0407 13:51:55.878239  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:55.878697  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:55.878726  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:55.878655  280949 retry.go:31] will retry after 253.423772ms: waiting for domain to come up
	I0407 13:51:56.134344  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:56.134746  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:56.134771  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:56.134703  280949 retry.go:31] will retry after 419.506631ms: waiting for domain to come up
	I0407 13:51:56.556343  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:56.556727  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:56.556755  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:56.556691  280949 retry.go:31] will retry after 408.525993ms: waiting for domain to come up
	I0407 13:51:56.967187  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:56.967604  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:56.967652  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:56.967570  280949 retry.go:31] will retry after 686.454069ms: waiting for domain to come up
	I0407 13:51:57.655228  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:57.655585  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:57.655614  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:57.655557  280949 retry.go:31] will retry after 584.056495ms: waiting for domain to come up
	I0407 13:51:58.241533  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:58.241936  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:58.241973  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:58.241887  280949 retry.go:31] will retry after 828.00204ms: waiting for domain to come up
	I0407 13:51:59.072072  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:51:59.072492  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:51:59.072524  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:51:59.072452  280949 retry.go:31] will retry after 1.309239095s: waiting for domain to come up
	I0407 13:52:00.383439  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:00.383902  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:52:00.383927  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:52:00.383880  280949 retry.go:31] will retry after 1.391665955s: waiting for domain to come up
	I0407 13:52:01.776777  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:01.777250  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:52:01.777277  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:52:01.777229  280949 retry.go:31] will retry after 2.261271568s: waiting for domain to come up
	I0407 13:52:04.041947  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:04.042380  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:52:04.042425  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:52:04.042372  280949 retry.go:31] will retry after 2.546021824s: waiting for domain to come up
	I0407 13:52:06.591549  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:06.591860  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:52:06.591906  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:52:06.591852  280949 retry.go:31] will retry after 2.786474538s: waiting for domain to come up
	I0407 13:52:09.380530  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:09.381082  280880 main.go:141] libmachine: (test-preload-673837) DBG | unable to find current IP address of domain test-preload-673837 in network mk-test-preload-673837
	I0407 13:52:09.381112  280880 main.go:141] libmachine: (test-preload-673837) DBG | I0407 13:52:09.381025  280949 retry.go:31] will retry after 2.77735882s: waiting for domain to come up
	I0407 13:52:12.161947  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.162523  280880 main.go:141] libmachine: (test-preload-673837) found domain IP: 192.168.39.38
	I0407 13:52:12.162542  280880 main.go:141] libmachine: (test-preload-673837) reserving static IP address...
	I0407 13:52:12.162558  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has current primary IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.163211  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "test-preload-673837", mac: "52:54:00:bd:bf:f8", ip: "192.168.39.38"} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.163241  280880 main.go:141] libmachine: (test-preload-673837) reserved static IP address 192.168.39.38 for domain test-preload-673837
	I0407 13:52:12.163252  280880 main.go:141] libmachine: (test-preload-673837) DBG | skip adding static IP to network mk-test-preload-673837 - found existing host DHCP lease matching {name: "test-preload-673837", mac: "52:54:00:bd:bf:f8", ip: "192.168.39.38"}
	I0407 13:52:12.163264  280880 main.go:141] libmachine: (test-preload-673837) DBG | Getting to WaitForSSH function...
	I0407 13:52:12.163276  280880 main.go:141] libmachine: (test-preload-673837) waiting for SSH...
	I0407 13:52:12.165700  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.166032  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.166063  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.166182  280880 main.go:141] libmachine: (test-preload-673837) DBG | Using SSH client type: external
	I0407 13:52:12.166204  280880 main.go:141] libmachine: (test-preload-673837) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa (-rw-------)
	I0407 13:52:12.166255  280880 main.go:141] libmachine: (test-preload-673837) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:52:12.166273  280880 main.go:141] libmachine: (test-preload-673837) DBG | About to run SSH command:
	I0407 13:52:12.166292  280880 main.go:141] libmachine: (test-preload-673837) DBG | exit 0
	I0407 13:52:12.292670  280880 main.go:141] libmachine: (test-preload-673837) DBG | SSH cmd err, output: <nil>: 
	I0407 13:52:12.293143  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetConfigRaw
	I0407 13:52:12.293784  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetIP
	I0407 13:52:12.296494  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.296827  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.296860  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.297136  280880 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/config.json ...
	I0407 13:52:12.297395  280880 machine.go:93] provisionDockerMachine start ...
	I0407 13:52:12.297422  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:12.297733  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:12.299983  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.300335  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.300364  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.300495  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:12.300687  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.300847  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.300952  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:12.301127  280880 main.go:141] libmachine: Using SSH client type: native
	I0407 13:52:12.301424  280880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0407 13:52:12.301436  280880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:52:12.413053  280880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:52:12.413083  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetMachineName
	I0407 13:52:12.413361  280880 buildroot.go:166] provisioning hostname "test-preload-673837"
	I0407 13:52:12.413391  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetMachineName
	I0407 13:52:12.413590  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:12.416499  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.416841  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.416871  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.416969  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:12.417159  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.417327  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.417569  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:12.417793  280880 main.go:141] libmachine: Using SSH client type: native
	I0407 13:52:12.418065  280880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0407 13:52:12.418081  280880 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-673837 && echo "test-preload-673837" | sudo tee /etc/hostname
	I0407 13:52:12.543657  280880 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-673837
	
	I0407 13:52:12.543688  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:12.546476  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.546898  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.546929  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.547120  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:12.547321  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.547480  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:12.547674  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:12.547917  280880 main.go:141] libmachine: Using SSH client type: native
	I0407 13:52:12.548138  280880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0407 13:52:12.548155  280880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-673837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-673837/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-673837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:52:12.670243  280880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:52:12.670281  280880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 13:52:12.670307  280880 buildroot.go:174] setting up certificates
	I0407 13:52:12.670324  280880 provision.go:84] configureAuth start
	I0407 13:52:12.670343  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetMachineName
	I0407 13:52:12.670613  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetIP
	I0407 13:52:12.673248  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.673687  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.673716  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.673790  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:12.675768  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.676038  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:12.676084  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:12.676147  280880 provision.go:143] copyHostCerts
	I0407 13:52:12.676213  280880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 13:52:12.676234  280880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 13:52:12.676296  280880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 13:52:12.676416  280880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 13:52:12.676450  280880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 13:52:12.676489  280880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 13:52:12.676555  280880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 13:52:12.676562  280880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 13:52:12.676586  280880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 13:52:12.676635  280880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.test-preload-673837 san=[127.0.0.1 192.168.39.38 localhost minikube test-preload-673837]
	I0407 13:52:13.114554  280880 provision.go:177] copyRemoteCerts
	I0407 13:52:13.114621  280880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:52:13.114648  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.117879  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.118262  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.118287  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.118434  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.118650  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.118783  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.118921  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:13.207250  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:52:13.231696  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0407 13:52:13.255817  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:52:13.281316  280880 provision.go:87] duration metric: took 610.973286ms to configureAuth
	I0407 13:52:13.281348  280880 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:52:13.281562  280880 config.go:182] Loaded profile config "test-preload-673837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:52:13.281654  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.284819  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.285216  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.285248  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.285413  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.285620  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.285788  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.285939  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.286164  280880 main.go:141] libmachine: Using SSH client type: native
	I0407 13:52:13.286439  280880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0407 13:52:13.286461  280880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:52:13.513083  280880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:52:13.513112  280880 machine.go:96] duration metric: took 1.215699797s to provisionDockerMachine
	I0407 13:52:13.513128  280880 start.go:293] postStartSetup for "test-preload-673837" (driver="kvm2")
	I0407 13:52:13.513143  280880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:52:13.513166  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:13.513531  280880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:52:13.513571  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.516299  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.516672  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.516695  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.516837  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.517044  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.517292  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.517493  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:13.603844  280880 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:52:13.608480  280880 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:52:13.608505  280880 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 13:52:13.608577  280880 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 13:52:13.608668  280880 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 13:52:13.608775  280880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:52:13.618417  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 13:52:13.646255  280880 start.go:296] duration metric: took 133.109092ms for postStartSetup
	I0407 13:52:13.646309  280880 fix.go:56] duration metric: took 19.317011711s for fixHost
	I0407 13:52:13.646332  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.649325  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.649720  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.649751  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.649911  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.650119  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.650271  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.650388  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.650537  280880 main.go:141] libmachine: Using SSH client type: native
	I0407 13:52:13.650758  280880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0407 13:52:13.650767  280880 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:52:13.761262  280880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744033933.733271506
	
	I0407 13:52:13.761291  280880 fix.go:216] guest clock: 1744033933.733271506
	I0407 13:52:13.761298  280880 fix.go:229] Guest: 2025-04-07 13:52:13.733271506 +0000 UTC Remote: 2025-04-07 13:52:13.646314075 +0000 UTC m=+25.413595339 (delta=86.957431ms)
	I0407 13:52:13.761318  280880 fix.go:200] guest clock delta is within tolerance: 86.957431ms
	I0407 13:52:13.761322  280880 start.go:83] releasing machines lock for "test-preload-673837", held for 19.432037734s
	I0407 13:52:13.761340  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:13.761652  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetIP
	I0407 13:52:13.764499  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.764884  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.764918  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.765085  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:13.765635  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:13.765830  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:13.765943  280880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:52:13.765985  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.766025  280880 ssh_runner.go:195] Run: cat /version.json
	I0407 13:52:13.766044  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:13.768662  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.768692  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.769043  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.769072  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.769097  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:13.769109  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:13.769240  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.769370  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:13.769507  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.769545  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:13.769669  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.769686  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:13.769855  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:13.769972  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:13.876828  280880 ssh_runner.go:195] Run: systemctl --version
	I0407 13:52:13.882901  280880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:52:14.024633  280880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:52:14.030879  280880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:52:14.030956  280880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:52:14.046935  280880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:52:14.046961  280880 start.go:495] detecting cgroup driver to use...
	I0407 13:52:14.047038  280880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:52:14.062209  280880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:52:14.075931  280880 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:52:14.075990  280880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:52:14.089784  280880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:52:14.104013  280880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:52:14.218732  280880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:52:14.359937  280880 docker.go:233] disabling docker service ...
	I0407 13:52:14.360030  280880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:52:14.375956  280880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:52:14.388785  280880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:52:14.516283  280880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:52:14.649087  280880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:52:14.662826  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:52:14.681688  280880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0407 13:52:14.681763  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.692578  280880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:52:14.692648  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.703366  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.714336  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.725774  280880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:52:14.736885  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.747960  280880 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.765194  280880 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:52:14.776348  280880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:52:14.787530  280880 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:52:14.787600  280880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:52:14.801625  280880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:52:14.811212  280880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:52:14.928083  280880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:52:15.021182  280880 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:52:15.021267  280880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:52:15.026157  280880 start.go:563] Will wait 60s for crictl version
	I0407 13:52:15.026213  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:15.030098  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:52:15.070819  280880 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:52:15.070896  280880 ssh_runner.go:195] Run: crio --version
	I0407 13:52:15.100802  280880 ssh_runner.go:195] Run: crio --version
	I0407 13:52:15.131467  280880 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0407 13:52:15.132964  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetIP
	I0407 13:52:15.135565  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:15.135966  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:15.135999  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:15.136225  280880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 13:52:15.140807  280880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:52:15.153794  280880 kubeadm.go:883] updating cluster {Name:test-preload-673837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-673837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:52:15.153945  280880 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0407 13:52:15.154002  280880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:52:15.192242  280880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0407 13:52:15.192324  280880 ssh_runner.go:195] Run: which lz4
	I0407 13:52:15.196498  280880 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:52:15.200792  280880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:52:15.200825  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0407 13:52:16.779852  280880 crio.go:462] duration metric: took 1.583393223s to copy over tarball
	I0407 13:52:16.779950  280880 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:52:19.207449  280880 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.427464515s)
	I0407 13:52:19.207477  280880 crio.go:469] duration metric: took 2.427583866s to extract the tarball
	I0407 13:52:19.207485  280880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:52:19.249134  280880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:52:19.293664  280880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0407 13:52:19.293690  280880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:52:19.293749  280880 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:52:19.293769  280880 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.293801  280880 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.293816  280880 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.293830  280880 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.293853  280880 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0407 13:52:19.293897  280880 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.293908  280880 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.295197  280880 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.295202  280880 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.295214  280880 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.295217  280880 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.295219  280880 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0407 13:52:19.295196  280880 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:52:19.295222  280880 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.295262  280880 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.428784  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.438407  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.438803  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.440554  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0407 13:52:19.446596  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.472535  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.519327  280880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0407 13:52:19.519384  280880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.519462  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.545471  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.563765  280880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0407 13:52:19.563825  280880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.563882  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.591422  280880 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0407 13:52:19.591477  280880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.591533  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.603090  280880 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0407 13:52:19.603111  280880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0407 13:52:19.603149  280880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.603161  280880 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0407 13:52:19.603194  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.603214  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.624900  280880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0407 13:52:19.624954  280880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.624976  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.624999  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.632355  280880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0407 13:52:19.632410  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.632435  280880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.632457  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.632475  280880 ssh_runner.go:195] Run: which crictl
	I0407 13:52:19.632496  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.632545  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:52:19.632565  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.748631  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.748920  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.760986  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.764696  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.764761  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.764827  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:52:19.764828  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.862137  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0407 13:52:19.875867  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0407 13:52:19.946894  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0407 13:52:19.947957  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:19.948964  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0407 13:52:19.949108  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0407 13:52:19.949124  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0407 13:52:19.965403  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0407 13:52:19.965522  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:52:20.004127  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0407 13:52:20.004238  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:52:20.049780  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0407 13:52:20.049924  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:52:20.050802  280880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0407 13:52:20.080770  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0407 13:52:20.080884  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0407 13:52:20.080899  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0407 13:52:20.080935  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0407 13:52:20.080958  280880 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:52:20.080985  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:52:20.081003  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0407 13:52:20.081005  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0407 13:52:20.081079  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0407 13:52:20.081195  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0407 13:52:20.081281  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:52:20.112778  280880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0407 13:52:20.112876  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0407 13:52:20.112899  280880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:52:21.113210  280880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:52:22.855857  280880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.774829083s)
	I0407 13:52:22.855891  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0407 13:52:22.855921  280880 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:52:22.855937  280880 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.774922096s)
	I0407 13:52:22.855983  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0407 13:52:22.855985  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0407 13:52:22.856022  280880 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.774711652s)
	I0407 13:52:22.856058  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0407 13:52:22.856069  280880 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.743156511s)
	I0407 13:52:22.856093  280880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0407 13:52:22.856099  280880 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.74286247s)
	I0407 13:52:23.599355  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0407 13:52:23.599416  280880 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:52:23.599483  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0407 13:52:25.752352  280880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.152809374s)
	I0407 13:52:25.752389  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0407 13:52:25.752437  280880 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0407 13:52:25.752496  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0407 13:52:25.897108  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0407 13:52:25.897183  280880 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:52:25.897241  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0407 13:52:26.346656  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0407 13:52:26.346709  280880 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:52:26.346754  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0407 13:52:26.686032  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0407 13:52:26.686099  280880 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:52:26.686160  280880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0407 13:52:27.534425  280880 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0407 13:52:27.534483  280880 cache_images.go:123] Successfully loaded all cached images
	I0407 13:52:27.534490  280880 cache_images.go:92] duration metric: took 8.240786934s to LoadCachedImages
	I0407 13:52:27.534505  280880 kubeadm.go:934] updating node { 192.168.39.38 8443 v1.24.4 crio true true} ...
	I0407 13:52:27.534635  280880 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-673837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-673837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:52:27.534753  280880 ssh_runner.go:195] Run: crio config
	I0407 13:52:27.585563  280880 cni.go:84] Creating CNI manager for ""
	I0407 13:52:27.585584  280880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:52:27.585594  280880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:52:27.585612  280880 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-673837 NodeName:test-preload-673837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:52:27.585730  280880 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-673837"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:52:27.585794  280880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0407 13:52:27.596205  280880 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:52:27.596272  280880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:52:27.605774  280880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0407 13:52:27.623165  280880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:52:27.639878  280880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0407 13:52:27.656666  280880 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0407 13:52:27.660489  280880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:52:27.672819  280880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:52:27.809231  280880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:52:27.827320  280880 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837 for IP: 192.168.39.38
	I0407 13:52:27.827345  280880 certs.go:194] generating shared ca certs ...
	I0407 13:52:27.827366  280880 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:52:27.827562  280880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 13:52:27.827623  280880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 13:52:27.827638  280880 certs.go:256] generating profile certs ...
	I0407 13:52:27.827726  280880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/client.key
	I0407 13:52:27.827806  280880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/apiserver.key.ff892724
	I0407 13:52:27.827873  280880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/proxy-client.key
	I0407 13:52:27.828022  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 13:52:27.828056  280880 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 13:52:27.828065  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:52:27.828090  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:52:27.828117  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:52:27.828143  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 13:52:27.828178  280880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 13:52:27.828952  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:52:27.884320  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:52:27.920045  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:52:27.945488  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:52:27.970379  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0407 13:52:27.996443  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:52:28.021511  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:52:28.062738  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:52:28.086640  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:52:28.110401  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 13:52:28.133682  280880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 13:52:28.157342  280880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:52:28.174123  280880 ssh_runner.go:195] Run: openssl version
	I0407 13:52:28.179849  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:52:28.190439  280880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:52:28.195052  280880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:52:28.195119  280880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:52:28.200927  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:52:28.211441  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 13:52:28.222012  280880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 13:52:28.226687  280880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 13:52:28.226733  280880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 13:52:28.232494  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 13:52:28.242938  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 13:52:28.253256  280880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 13:52:28.257615  280880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 13:52:28.257656  280880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 13:52:28.263110  280880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:52:28.273560  280880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:52:28.278147  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 13:52:28.283872  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 13:52:28.289467  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 13:52:28.295257  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 13:52:28.301003  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 13:52:28.306579  280880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 13:52:28.312183  280880 kubeadm.go:392] StartCluster: {Name:test-preload-673837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-673837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:52:28.312278  280880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:52:28.312316  280880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:52:28.359538  280880 cri.go:89] found id: ""
	I0407 13:52:28.359626  280880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:52:28.371994  280880 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 13:52:28.372019  280880 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 13:52:28.372080  280880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 13:52:28.382020  280880 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:52:28.382484  280880 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-673837" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:52:28.382618  280880 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-242355/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-673837" cluster setting kubeconfig missing "test-preload-673837" context setting]
	I0407 13:52:28.382999  280880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:52:28.383540  280880 kapi.go:59] client config for test-preload-673837: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/client.crt", KeyFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/client.key", CAFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 13:52:28.384056  280880 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 13:52:28.384079  280880 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 13:52:28.384085  280880 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 13:52:28.384091  280880 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 13:52:28.384387  280880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 13:52:28.393757  280880 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.38
	I0407 13:52:28.393789  280880 kubeadm.go:1160] stopping kube-system containers ...
	I0407 13:52:28.393818  280880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 13:52:28.393962  280880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:52:28.436712  280880 cri.go:89] found id: ""
	I0407 13:52:28.436796  280880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 13:52:28.452495  280880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:52:28.462336  280880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:52:28.462354  280880 kubeadm.go:157] found existing configuration files:
	
	I0407 13:52:28.462420  280880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:52:28.471651  280880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:52:28.471766  280880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:52:28.481869  280880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:52:28.491669  280880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:52:28.491724  280880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:52:28.501986  280880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:52:28.511574  280880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:52:28.511635  280880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:52:28.520986  280880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:52:28.529986  280880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:52:28.530033  280880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:52:28.540309  280880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:52:28.550488  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:28.652863  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:29.575319  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:29.834729  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:29.905143  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:29.993878  280880 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:52:29.993960  280880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:52:30.494385  280880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:52:30.994732  280880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:52:31.020088  280880 api_server.go:72] duration metric: took 1.026207016s to wait for apiserver process to appear ...
	I0407 13:52:31.020131  280880 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:52:31.020158  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:31.020853  280880 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0407 13:52:31.520513  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:31.521288  280880 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0407 13:52:32.021053  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:34.971251  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 13:52:34.971287  280880 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 13:52:34.971304  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:35.088973  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:52:35.089023  280880 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:52:35.089040  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:35.103171  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:52:35.103199  280880 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:52:35.520825  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:35.526662  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:52:35.526693  280880 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:52:36.020226  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:36.025711  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 13:52:36.025740  280880 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 13:52:36.520436  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:36.532981  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0407 13:52:36.539854  280880 api_server.go:141] control plane version: v1.24.4
	I0407 13:52:36.539884  280880 api_server.go:131] duration metric: took 5.519746252s to wait for apiserver health ...
	I0407 13:52:36.539893  280880 cni.go:84] Creating CNI manager for ""
	I0407 13:52:36.539899  280880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:52:36.541635  280880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 13:52:36.542903  280880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 13:52:36.554850  280880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 13:52:36.574289  280880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:52:36.578232  280880 system_pods.go:59] 7 kube-system pods found
	I0407 13:52:36.578272  280880 system_pods.go:61] "coredns-6d4b75cb6d-qm55b" [38839847-4bea-458e-bae4-867c92ef468c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 13:52:36.578281  280880 system_pods.go:61] "etcd-test-preload-673837" [1f0d66df-ff1b-47cc-9b6f-31521ff394d2] Running
	I0407 13:52:36.578294  280880 system_pods.go:61] "kube-apiserver-test-preload-673837" [7dfd6dd8-a700-4b64-a5e8-419b8eab9992] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 13:52:36.578303  280880 system_pods.go:61] "kube-controller-manager-test-preload-673837" [a6fb4571-f685-4afd-be18-23fe80290287] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 13:52:36.578319  280880 system_pods.go:61] "kube-proxy-nwk94" [b343ff23-9775-4dd5-a00c-b34cd0699be8] Running
	I0407 13:52:36.578326  280880 system_pods.go:61] "kube-scheduler-test-preload-673837" [0b5edb74-99e0-40d4-ac56-0707b8d0f9fe] Running
	I0407 13:52:36.578337  280880 system_pods.go:61] "storage-provisioner" [947c51c7-ebc2-4381-8f6a-2404fdeb88c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 13:52:36.578348  280880 system_pods.go:74] duration metric: took 4.034676ms to wait for pod list to return data ...
	I0407 13:52:36.578362  280880 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:52:36.581083  280880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:52:36.581122  280880 node_conditions.go:123] node cpu capacity is 2
	I0407 13:52:36.581137  280880 node_conditions.go:105] duration metric: took 2.765756ms to run NodePressure ...
	I0407 13:52:36.581164  280880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 13:52:36.781447  280880 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0407 13:52:36.784691  280880 kubeadm.go:739] kubelet initialised
	I0407 13:52:36.784713  280880 kubeadm.go:740] duration metric: took 3.240534ms waiting for restarted kubelet to initialise ...
	I0407 13:52:36.784720  280880 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:52:36.788361  280880 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:36.792166  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.792187  280880 pod_ready.go:82] duration metric: took 3.804588ms for pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:36.792194  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.792200  280880 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:36.798013  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "etcd-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.798034  280880 pod_ready.go:82] duration metric: took 5.824524ms for pod "etcd-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:36.798044  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "etcd-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.798049  280880 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:36.801626  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "kube-apiserver-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.801655  280880 pod_ready.go:82] duration metric: took 3.591938ms for pod "kube-apiserver-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:36.801663  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "kube-apiserver-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.801669  280880 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:36.978584  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.978611  280880 pod_ready.go:82] duration metric: took 176.933929ms for pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:36.978621  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:36.978628  280880 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nwk94" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:37.385215  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "kube-proxy-nwk94" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:37.385247  280880 pod_ready.go:82] duration metric: took 406.608625ms for pod "kube-proxy-nwk94" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:37.385262  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "kube-proxy-nwk94" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:37.385270  280880 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:37.778182  280880 pod_ready.go:98] node "test-preload-673837" hosting pod "kube-scheduler-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:37.778217  280880 pod_ready.go:82] duration metric: took 392.938925ms for pod "kube-scheduler-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	E0407 13:52:37.778234  280880 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-673837" hosting pod "kube-scheduler-test-preload-673837" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:37.778246  280880 pod_ready.go:39] duration metric: took 993.510261ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:52:37.778269  280880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:52:37.790601  280880 ops.go:34] apiserver oom_adj: -16
	I0407 13:52:37.790628  280880 kubeadm.go:597] duration metric: took 9.418602016s to restartPrimaryControlPlane
	I0407 13:52:37.790651  280880 kubeadm.go:394] duration metric: took 9.478482033s to StartCluster
	I0407 13:52:37.790671  280880 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:52:37.790744  280880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:52:37.791370  280880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:52:37.791635  280880 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:52:37.791700  280880 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:52:37.791821  280880 addons.go:69] Setting storage-provisioner=true in profile "test-preload-673837"
	I0407 13:52:37.791839  280880 addons.go:238] Setting addon storage-provisioner=true in "test-preload-673837"
	W0407 13:52:37.791848  280880 addons.go:247] addon storage-provisioner should already be in state true
	I0407 13:52:37.791853  280880 addons.go:69] Setting default-storageclass=true in profile "test-preload-673837"
	I0407 13:52:37.791872  280880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-673837"
	I0407 13:52:37.791881  280880 host.go:66] Checking if "test-preload-673837" exists ...
	I0407 13:52:37.791935  280880 config.go:182] Loaded profile config "test-preload-673837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0407 13:52:37.792311  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:52:37.792311  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:52:37.792354  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:52:37.792360  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:52:37.793500  280880 out.go:177] * Verifying Kubernetes components...
	I0407 13:52:37.794980  280880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:52:37.808009  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44371
	I0407 13:52:37.808118  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0407 13:52:37.808542  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:52:37.808668  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:52:37.809041  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:52:37.809057  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:52:37.809180  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:52:37.809202  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:52:37.809422  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:52:37.809624  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetState
	I0407 13:52:37.809668  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:52:37.810331  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:52:37.810401  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:52:37.812075  280880 kapi.go:59] client config for test-preload-673837: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/client.crt", KeyFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/profiles/test-preload-673837/client.key", CAFile:"/home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 13:52:37.812453  280880 addons.go:238] Setting addon default-storageclass=true in "test-preload-673837"
	W0407 13:52:37.812476  280880 addons.go:247] addon default-storageclass should already be in state true
	I0407 13:52:37.812505  280880 host.go:66] Checking if "test-preload-673837" exists ...
	I0407 13:52:37.812871  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:52:37.812915  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:52:37.826414  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0407 13:52:37.826864  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:52:37.827355  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:52:37.827380  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:52:37.827728  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:52:37.827958  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetState
	I0407 13:52:37.829057  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0407 13:52:37.829580  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:52:37.829816  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:37.830069  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:52:37.830086  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:52:37.830473  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:52:37.831261  280880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:52:37.831338  280880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:52:37.831693  280880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:52:37.833145  280880 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:52:37.833168  280880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:52:37.833187  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:37.836266  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:37.836742  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:37.836771  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:37.836929  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:37.837171  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:37.837344  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:37.837484  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:37.868710  280880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0407 13:52:37.869270  280880 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:52:37.869728  280880 main.go:141] libmachine: Using API Version  1
	I0407 13:52:37.869761  280880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:52:37.870181  280880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:52:37.870384  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetState
	I0407 13:52:37.872014  280880 main.go:141] libmachine: (test-preload-673837) Calling .DriverName
	I0407 13:52:37.872261  280880 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:52:37.872280  280880 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:52:37.872299  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHHostname
	I0407 13:52:37.875229  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:37.875757  280880 main.go:141] libmachine: (test-preload-673837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:bf:f8", ip: ""} in network mk-test-preload-673837: {Iface:virbr1 ExpiryTime:2025-04-07 14:52:06 +0000 UTC Type:0 Mac:52:54:00:bd:bf:f8 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:test-preload-673837 Clientid:01:52:54:00:bd:bf:f8}
	I0407 13:52:37.875783  280880 main.go:141] libmachine: (test-preload-673837) DBG | domain test-preload-673837 has defined IP address 192.168.39.38 and MAC address 52:54:00:bd:bf:f8 in network mk-test-preload-673837
	I0407 13:52:37.876004  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHPort
	I0407 13:52:37.876203  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHKeyPath
	I0407 13:52:37.876409  280880 main.go:141] libmachine: (test-preload-673837) Calling .GetSSHUsername
	I0407 13:52:37.876594  280880 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/test-preload-673837/id_rsa Username:docker}
	I0407 13:52:37.961222  280880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:52:37.981104  280880 node_ready.go:35] waiting up to 6m0s for node "test-preload-673837" to be "Ready" ...
	I0407 13:52:38.079061  280880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:52:38.105272  280880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:52:39.115766  280880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.01045405s)
	I0407 13:52:39.115833  280880 main.go:141] libmachine: Making call to close driver server
	I0407 13:52:39.115845  280880 main.go:141] libmachine: (test-preload-673837) Calling .Close
	I0407 13:52:39.115843  280880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.036742803s)
	I0407 13:52:39.115897  280880 main.go:141] libmachine: Making call to close driver server
	I0407 13:52:39.115918  280880 main.go:141] libmachine: (test-preload-673837) Calling .Close
	I0407 13:52:39.116267  280880 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:52:39.116276  280880 main.go:141] libmachine: (test-preload-673837) DBG | Closing plugin on server side
	I0407 13:52:39.116274  280880 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:52:39.116289  280880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:52:39.116295  280880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:52:39.116302  280880 main.go:141] libmachine: Making call to close driver server
	I0407 13:52:39.116309  280880 main.go:141] libmachine: (test-preload-673837) DBG | Closing plugin on server side
	I0407 13:52:39.116312  280880 main.go:141] libmachine: (test-preload-673837) Calling .Close
	I0407 13:52:39.116303  280880 main.go:141] libmachine: Making call to close driver server
	I0407 13:52:39.116452  280880 main.go:141] libmachine: (test-preload-673837) Calling .Close
	I0407 13:52:39.116614  280880 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:52:39.116630  280880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:52:39.116645  280880 main.go:141] libmachine: (test-preload-673837) DBG | Closing plugin on server side
	I0407 13:52:39.116726  280880 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:52:39.116743  280880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:52:39.127277  280880 main.go:141] libmachine: Making call to close driver server
	I0407 13:52:39.127301  280880 main.go:141] libmachine: (test-preload-673837) Calling .Close
	I0407 13:52:39.127539  280880 main.go:141] libmachine: Successfully made call to close driver server
	I0407 13:52:39.127558  280880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 13:52:39.127578  280880 main.go:141] libmachine: (test-preload-673837) DBG | Closing plugin on server side
	I0407 13:52:39.128766  280880 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:52:39.129761  280880 addons.go:514] duration metric: took 1.338071557s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 13:52:39.984532  280880 node_ready.go:53] node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:41.984828  280880 node_ready.go:53] node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:44.485982  280880 node_ready.go:53] node "test-preload-673837" has status "Ready":"False"
	I0407 13:52:45.484595  280880 node_ready.go:49] node "test-preload-673837" has status "Ready":"True"
	I0407 13:52:45.484623  280880 node_ready.go:38] duration metric: took 7.503484441s for node "test-preload-673837" to be "Ready" ...
	I0407 13:52:45.484632  280880 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:52:45.488131  280880 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:45.492833  280880 pod_ready.go:93] pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:45.492874  280880 pod_ready.go:82] duration metric: took 4.712751ms for pod "coredns-6d4b75cb6d-qm55b" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:45.492887  280880 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:45.998679  280880 pod_ready.go:93] pod "etcd-test-preload-673837" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:45.998711  280880 pod_ready.go:82] duration metric: took 505.815607ms for pod "etcd-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:45.998724  280880 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.002145  280880 pod_ready.go:93] pod "kube-apiserver-test-preload-673837" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:46.002169  280880 pod_ready.go:82] duration metric: took 3.436209ms for pod "kube-apiserver-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.002179  280880 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.005576  280880 pod_ready.go:93] pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:46.005600  280880 pod_ready.go:82] duration metric: took 3.412228ms for pod "kube-controller-manager-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.005612  280880 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nwk94" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.285737  280880 pod_ready.go:93] pod "kube-proxy-nwk94" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:46.285769  280880 pod_ready.go:82] duration metric: took 280.148239ms for pod "kube-proxy-nwk94" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.285784  280880 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.686006  280880 pod_ready.go:93] pod "kube-scheduler-test-preload-673837" in "kube-system" namespace has status "Ready":"True"
	I0407 13:52:46.686041  280880 pod_ready.go:82] duration metric: took 400.248641ms for pod "kube-scheduler-test-preload-673837" in "kube-system" namespace to be "Ready" ...
	I0407 13:52:46.686054  280880 pod_ready.go:39] duration metric: took 1.201409378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:52:46.686074  280880 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:52:46.686129  280880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:52:46.702123  280880 api_server.go:72] duration metric: took 8.910447603s to wait for apiserver process to appear ...
	I0407 13:52:46.702157  280880 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:52:46.702180  280880 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0407 13:52:46.707700  280880 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0407 13:52:46.708923  280880 api_server.go:141] control plane version: v1.24.4
	I0407 13:52:46.708948  280880 api_server.go:131] duration metric: took 6.783854ms to wait for apiserver health ...
	I0407 13:52:46.708956  280880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:52:46.886561  280880 system_pods.go:59] 7 kube-system pods found
	I0407 13:52:46.886605  280880 system_pods.go:61] "coredns-6d4b75cb6d-qm55b" [38839847-4bea-458e-bae4-867c92ef468c] Running
	I0407 13:52:46.886615  280880 system_pods.go:61] "etcd-test-preload-673837" [1f0d66df-ff1b-47cc-9b6f-31521ff394d2] Running
	I0407 13:52:46.886622  280880 system_pods.go:61] "kube-apiserver-test-preload-673837" [7dfd6dd8-a700-4b64-a5e8-419b8eab9992] Running
	I0407 13:52:46.886635  280880 system_pods.go:61] "kube-controller-manager-test-preload-673837" [a6fb4571-f685-4afd-be18-23fe80290287] Running
	I0407 13:52:46.886639  280880 system_pods.go:61] "kube-proxy-nwk94" [b343ff23-9775-4dd5-a00c-b34cd0699be8] Running
	I0407 13:52:46.886646  280880 system_pods.go:61] "kube-scheduler-test-preload-673837" [0b5edb74-99e0-40d4-ac56-0707b8d0f9fe] Running
	I0407 13:52:46.886652  280880 system_pods.go:61] "storage-provisioner" [947c51c7-ebc2-4381-8f6a-2404fdeb88c5] Running
	I0407 13:52:46.886662  280880 system_pods.go:74] duration metric: took 177.698917ms to wait for pod list to return data ...
	I0407 13:52:46.886678  280880 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:52:47.085215  280880 default_sa.go:45] found service account: "default"
	I0407 13:52:47.085246  280880 default_sa.go:55] duration metric: took 198.5605ms for default service account to be created ...
	I0407 13:52:47.085258  280880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:52:47.286929  280880 system_pods.go:86] 7 kube-system pods found
	I0407 13:52:47.286969  280880 system_pods.go:89] "coredns-6d4b75cb6d-qm55b" [38839847-4bea-458e-bae4-867c92ef468c] Running
	I0407 13:52:47.286975  280880 system_pods.go:89] "etcd-test-preload-673837" [1f0d66df-ff1b-47cc-9b6f-31521ff394d2] Running
	I0407 13:52:47.286978  280880 system_pods.go:89] "kube-apiserver-test-preload-673837" [7dfd6dd8-a700-4b64-a5e8-419b8eab9992] Running
	I0407 13:52:47.286982  280880 system_pods.go:89] "kube-controller-manager-test-preload-673837" [a6fb4571-f685-4afd-be18-23fe80290287] Running
	I0407 13:52:47.286985  280880 system_pods.go:89] "kube-proxy-nwk94" [b343ff23-9775-4dd5-a00c-b34cd0699be8] Running
	I0407 13:52:47.286988  280880 system_pods.go:89] "kube-scheduler-test-preload-673837" [0b5edb74-99e0-40d4-ac56-0707b8d0f9fe] Running
	I0407 13:52:47.286993  280880 system_pods.go:89] "storage-provisioner" [947c51c7-ebc2-4381-8f6a-2404fdeb88c5] Running
	I0407 13:52:47.287003  280880 system_pods.go:126] duration metric: took 201.737363ms to wait for k8s-apps to be running ...
	I0407 13:52:47.287013  280880 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:52:47.287071  280880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:52:47.302858  280880 system_svc.go:56] duration metric: took 15.834456ms WaitForService to wait for kubelet
	I0407 13:52:47.302894  280880 kubeadm.go:582] duration metric: took 9.511226877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:52:47.302921  280880 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:52:47.485295  280880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:52:47.485332  280880 node_conditions.go:123] node cpu capacity is 2
	I0407 13:52:47.485355  280880 node_conditions.go:105] duration metric: took 182.419658ms to run NodePressure ...
	I0407 13:52:47.485371  280880 start.go:241] waiting for startup goroutines ...
	I0407 13:52:47.485381  280880 start.go:246] waiting for cluster config update ...
	I0407 13:52:47.485396  280880 start.go:255] writing updated cluster config ...
	I0407 13:52:47.485741  280880 ssh_runner.go:195] Run: rm -f paused
	I0407 13:52:47.534922  280880 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0407 13:52:47.536721  280880 out.go:201] 
	W0407 13:52:47.538145  280880 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0407 13:52:47.539345  280880 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0407 13:52:47.540686  280880 out.go:177] * Done! kubectl is now configured to use "test-preload-673837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.495027079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033968495002236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa958e8f-7e64-483e-ab60-6dd72bbd7022 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.495681320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=396f1ca5-32ad-4657-b3ed-383d92f15682 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.495734932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=396f1ca5-32ad-4657-b3ed-383d92f15682 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.495944163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e00c561d94377840c8e2b29279b059dd3e8b33c389311e47e9c66a4234c31195,PodSandboxId:14a4627670f464ba1bcb7bce0ad1145c90259939f9548d1b7b531efff197e576,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744033964221069070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qm55b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38839847-4bea-458e-bae4-867c92ef468c,},Annotations:map[string]string{io.kubernetes.container.hash: a61d1c6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4f206685b0773ebbcf85ef3e854a9d5a140771982d4fc19f8f98cf3bb54588,PodSandboxId:8183129d709f77a638e9b46f7a36ff7e365db044a80f60eb0013f87a0caae034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744033957252546319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 947c51c7-ebc2-4381-8f6a-2404fdeb88c5,},Annotations:map[string]string{io.kubernetes.container.hash: fe0be5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60046107574cfb4493d01264abde356b41ed5db30e261c43d908ea9093b3671,PodSandboxId:a0348f061c9df61f8a14aeed28d7ed7c1ed01130f798ed3984ea8aa48969e2e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744033956951849264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nwk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b34
3ff23-9775-4dd5-a00c-b34cd0699be8,},Annotations:map[string]string{io.kubernetes.container.hash: 60c00eca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e0bd15805c66cf9943d4c93276c8192e1b29b62703bec8ab1c341cbcc328ee,PodSandboxId:421eb7ac8ed8eb28383110d272d6d2e03fd503613a23a20a3b2ec9fa08e03c16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744033950744085105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70cd273d91
ca62cdc01b2b8ee36baeaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a6ff490e0f297f6ee70365646ef18fd8195dfd0f62d0468198f857d1ff0ac,PodSandboxId:5b2ceb890f965705abb252dc34f63e7e80fde26955c739154d1b5a9ce94a65d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744033950763737667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bfc6d043c0994d74a02f7b7884ea086,},Annotations:map[
string]string{io.kubernetes.container.hash: a591a995,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f8643fb6704d869fe1d5bc12aecd387407b2e866d44c3b5fd8c19c55b17b9,PodSandboxId:d961ed1845855e6ddd03b176b2a6cb9f188699c4bb175d25ecaf29d8cca192bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744033950705644361,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28878323513eef44741324ec0cea4939,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f67456934815f20e0ba17faf87b64856c95944693d4a90630ff006ff585639b,PodSandboxId:a0837fa005c3c4dba46cb259a87591dd0cab06eabab2711ae60e472a216f7ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744033950656559092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a19a1fb051569158e4731343d129ae,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=396f1ca5-32ad-4657-b3ed-383d92f15682 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.533430838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63ecd5b9-7a53-4642-845c-8a74521f2724 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.533594474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63ecd5b9-7a53-4642-845c-8a74521f2724 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.534974678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f93d78f8-36cf-407d-8ffa-7ea21fcbb37d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.535397367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033968535377142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f93d78f8-36cf-407d-8ffa-7ea21fcbb37d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.535820198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36e507cf-7abb-4ec2-8568-3032a563e56a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.535869834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36e507cf-7abb-4ec2-8568-3032a563e56a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.536083518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e00c561d94377840c8e2b29279b059dd3e8b33c389311e47e9c66a4234c31195,PodSandboxId:14a4627670f464ba1bcb7bce0ad1145c90259939f9548d1b7b531efff197e576,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744033964221069070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qm55b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38839847-4bea-458e-bae4-867c92ef468c,},Annotations:map[string]string{io.kubernetes.container.hash: a61d1c6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4f206685b0773ebbcf85ef3e854a9d5a140771982d4fc19f8f98cf3bb54588,PodSandboxId:8183129d709f77a638e9b46f7a36ff7e365db044a80f60eb0013f87a0caae034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744033957252546319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 947c51c7-ebc2-4381-8f6a-2404fdeb88c5,},Annotations:map[string]string{io.kubernetes.container.hash: fe0be5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60046107574cfb4493d01264abde356b41ed5db30e261c43d908ea9093b3671,PodSandboxId:a0348f061c9df61f8a14aeed28d7ed7c1ed01130f798ed3984ea8aa48969e2e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744033956951849264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nwk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b34
3ff23-9775-4dd5-a00c-b34cd0699be8,},Annotations:map[string]string{io.kubernetes.container.hash: 60c00eca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e0bd15805c66cf9943d4c93276c8192e1b29b62703bec8ab1c341cbcc328ee,PodSandboxId:421eb7ac8ed8eb28383110d272d6d2e03fd503613a23a20a3b2ec9fa08e03c16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744033950744085105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70cd273d91
ca62cdc01b2b8ee36baeaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a6ff490e0f297f6ee70365646ef18fd8195dfd0f62d0468198f857d1ff0ac,PodSandboxId:5b2ceb890f965705abb252dc34f63e7e80fde26955c739154d1b5a9ce94a65d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744033950763737667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bfc6d043c0994d74a02f7b7884ea086,},Annotations:map[
string]string{io.kubernetes.container.hash: a591a995,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f8643fb6704d869fe1d5bc12aecd387407b2e866d44c3b5fd8c19c55b17b9,PodSandboxId:d961ed1845855e6ddd03b176b2a6cb9f188699c4bb175d25ecaf29d8cca192bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744033950705644361,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28878323513eef44741324ec0cea4939,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f67456934815f20e0ba17faf87b64856c95944693d4a90630ff006ff585639b,PodSandboxId:a0837fa005c3c4dba46cb259a87591dd0cab06eabab2711ae60e472a216f7ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744033950656559092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a19a1fb051569158e4731343d129ae,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36e507cf-7abb-4ec2-8568-3032a563e56a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.571810991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d25a7155-0cbc-42a5-a777-e8b2dac66555 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.571944819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d25a7155-0cbc-42a5-a777-e8b2dac66555 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.572891198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d711c471-1e0a-46e8-bf91-091d206a8a5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.573376356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033968573354110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d711c471-1e0a-46e8-bf91-091d206a8a5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.573827284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=669b84c2-f727-43f8-ad5d-c44410fae4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.573878625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=669b84c2-f727-43f8-ad5d-c44410fae4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.574123842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e00c561d94377840c8e2b29279b059dd3e8b33c389311e47e9c66a4234c31195,PodSandboxId:14a4627670f464ba1bcb7bce0ad1145c90259939f9548d1b7b531efff197e576,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744033964221069070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qm55b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38839847-4bea-458e-bae4-867c92ef468c,},Annotations:map[string]string{io.kubernetes.container.hash: a61d1c6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4f206685b0773ebbcf85ef3e854a9d5a140771982d4fc19f8f98cf3bb54588,PodSandboxId:8183129d709f77a638e9b46f7a36ff7e365db044a80f60eb0013f87a0caae034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744033957252546319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 947c51c7-ebc2-4381-8f6a-2404fdeb88c5,},Annotations:map[string]string{io.kubernetes.container.hash: fe0be5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60046107574cfb4493d01264abde356b41ed5db30e261c43d908ea9093b3671,PodSandboxId:a0348f061c9df61f8a14aeed28d7ed7c1ed01130f798ed3984ea8aa48969e2e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744033956951849264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nwk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b34
3ff23-9775-4dd5-a00c-b34cd0699be8,},Annotations:map[string]string{io.kubernetes.container.hash: 60c00eca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e0bd15805c66cf9943d4c93276c8192e1b29b62703bec8ab1c341cbcc328ee,PodSandboxId:421eb7ac8ed8eb28383110d272d6d2e03fd503613a23a20a3b2ec9fa08e03c16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744033950744085105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70cd273d91
ca62cdc01b2b8ee36baeaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a6ff490e0f297f6ee70365646ef18fd8195dfd0f62d0468198f857d1ff0ac,PodSandboxId:5b2ceb890f965705abb252dc34f63e7e80fde26955c739154d1b5a9ce94a65d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744033950763737667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bfc6d043c0994d74a02f7b7884ea086,},Annotations:map[
string]string{io.kubernetes.container.hash: a591a995,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f8643fb6704d869fe1d5bc12aecd387407b2e866d44c3b5fd8c19c55b17b9,PodSandboxId:d961ed1845855e6ddd03b176b2a6cb9f188699c4bb175d25ecaf29d8cca192bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744033950705644361,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28878323513eef44741324ec0cea4939,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f67456934815f20e0ba17faf87b64856c95944693d4a90630ff006ff585639b,PodSandboxId:a0837fa005c3c4dba46cb259a87591dd0cab06eabab2711ae60e472a216f7ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744033950656559092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a19a1fb051569158e4731343d129ae,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=669b84c2-f727-43f8-ad5d-c44410fae4be name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.611534754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0bcadfc-c5e1-482f-a79a-2d7fc3419012 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.611628823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0bcadfc-c5e1-482f-a79a-2d7fc3419012 name=/runtime.v1.RuntimeService/Version
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.613168934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebf54c1f-dabc-4cf3-a128-6a66aab6d016 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.613582935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744033968613562111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebf54c1f-dabc-4cf3-a128-6a66aab6d016 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.614020915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bf20c38-120a-446b-a50a-1222d3e60bdd name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.614069162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bf20c38-120a-446b-a50a-1222d3e60bdd name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 13:52:48 test-preload-673837 crio[669]: time="2025-04-07 13:52:48.614275521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e00c561d94377840c8e2b29279b059dd3e8b33c389311e47e9c66a4234c31195,PodSandboxId:14a4627670f464ba1bcb7bce0ad1145c90259939f9548d1b7b531efff197e576,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744033964221069070,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-qm55b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38839847-4bea-458e-bae4-867c92ef468c,},Annotations:map[string]string{io.kubernetes.container.hash: a61d1c6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4f206685b0773ebbcf85ef3e854a9d5a140771982d4fc19f8f98cf3bb54588,PodSandboxId:8183129d709f77a638e9b46f7a36ff7e365db044a80f60eb0013f87a0caae034,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744033957252546319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 947c51c7-ebc2-4381-8f6a-2404fdeb88c5,},Annotations:map[string]string{io.kubernetes.container.hash: fe0be5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60046107574cfb4493d01264abde356b41ed5db30e261c43d908ea9093b3671,PodSandboxId:a0348f061c9df61f8a14aeed28d7ed7c1ed01130f798ed3984ea8aa48969e2e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744033956951849264,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nwk94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b34
3ff23-9775-4dd5-a00c-b34cd0699be8,},Annotations:map[string]string{io.kubernetes.container.hash: 60c00eca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e0bd15805c66cf9943d4c93276c8192e1b29b62703bec8ab1c341cbcc328ee,PodSandboxId:421eb7ac8ed8eb28383110d272d6d2e03fd503613a23a20a3b2ec9fa08e03c16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744033950744085105,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70cd273d91
ca62cdc01b2b8ee36baeaf,},Annotations:map[string]string{io.kubernetes.container.hash: 3cba74bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b73a6ff490e0f297f6ee70365646ef18fd8195dfd0f62d0468198f857d1ff0ac,PodSandboxId:5b2ceb890f965705abb252dc34f63e7e80fde26955c739154d1b5a9ce94a65d5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744033950763737667,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bfc6d043c0994d74a02f7b7884ea086,},Annotations:map[
string]string{io.kubernetes.container.hash: a591a995,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f8643fb6704d869fe1d5bc12aecd387407b2e866d44c3b5fd8c19c55b17b9,PodSandboxId:d961ed1845855e6ddd03b176b2a6cb9f188699c4bb175d25ecaf29d8cca192bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744033950705644361,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28878323513eef44741324ec0cea4939,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f67456934815f20e0ba17faf87b64856c95944693d4a90630ff006ff585639b,PodSandboxId:a0837fa005c3c4dba46cb259a87591dd0cab06eabab2711ae60e472a216f7ed2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744033950656559092,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-673837,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a19a1fb051569158e4731343d129ae,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bf20c38-120a-446b-a50a-1222d3e60bdd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e00c561d94377       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   14a4627670f46       coredns-6d4b75cb6d-qm55b
	2b4f206685b07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       1                   8183129d709f7       storage-provisioner
	f60046107574c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   a0348f061c9df       kube-proxy-nwk94
	b73a6ff490e0f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   5b2ceb890f965       etcd-test-preload-673837
	28e0bd15805c6       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   421eb7ac8ed8e       kube-apiserver-test-preload-673837
	a72f8643fb670       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   d961ed1845855       kube-scheduler-test-preload-673837
	6f67456934815       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   a0837fa005c3c       kube-controller-manager-test-preload-673837
	
	
	==> coredns [e00c561d94377840c8e2b29279b059dd3e8b33c389311e47e9c66a4234c31195] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38311 - 65098 "HINFO IN 5419928871332417070.5921196715923098007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05139152s
	
	
	==> describe nodes <==
	Name:               test-preload-673837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-673837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=test-preload-673837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_51_17_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:51:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-673837
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:52:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:52:45 +0000   Mon, 07 Apr 2025 13:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:52:45 +0000   Mon, 07 Apr 2025 13:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:52:45 +0000   Mon, 07 Apr 2025 13:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:52:45 +0000   Mon, 07 Apr 2025 13:52:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    test-preload-673837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e5ca72733377452ab124de33090ca51d
	  System UUID:                e5ca7273-3377-452a-b124-de33090ca51d
	  Boot ID:                    228083f5-5bcc-4a53-b73c-949cb1a041f8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-qm55b                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-673837                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-673837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-673837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-nwk94                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-673837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-673837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-673837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-673837 status is now: NodeHasSufficientPID
	  Normal  NodeReady                80s                kubelet          Node test-preload-673837 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-673837 event: Registered Node test-preload-673837 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node test-preload-673837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node test-preload-673837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node test-preload-673837 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-673837 event: Registered Node test-preload-673837 in Controller
	
	
	==> dmesg <==
	[Apr 7 13:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051859] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040343] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 7 13:52] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.761384] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.624425] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.936344] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.061780] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063943] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.156668] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.149174] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.287531] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +12.866812] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.064247] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.955437] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
	[  +6.773157] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.325716] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +6.148846] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [b73a6ff490e0f297f6ee70365646ef18fd8195dfd0f62d0468198f857d1ff0ac] <==
	{"level":"info","ts":"2025-04-07T13:52:31.289Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"38b26e584d45e0da","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T13:52:31.290Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T13:52:31.291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da switched to configuration voters=(4085449137511063770)"}
	{"level":"info","ts":"2025-04-07T13:52:31.291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","added-peer-id":"38b26e584d45e0da","added-peer-peer-urls":["https://192.168.39.38:2380"]}
	{"level":"info","ts":"2025-04-07T13:52:31.291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:52:31.291Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:52:31.306Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T13:52:31.306Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T13:52:31.306Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T13:52:31.306Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-04-07T13:52:31.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 3"}
	{"level":"info","ts":"2025-04-07T13:52:32.637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2025-04-07T13:52:32.638Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:test-preload-673837 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T13:52:32.638Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:52:32.640Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2025-04-07T13:52:32.640Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:52:32.641Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T13:52:32.641Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T13:52:32.641Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:52:48 up 0 min,  0 users,  load average: 0.82, 0.24, 0.08
	Linux test-preload-673837 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [28e0bd15805c66cf9943d4c93276c8192e1b29b62703bec8ab1c341cbcc328ee] <==
	I0407 13:52:34.941632       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0407 13:52:34.941667       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0407 13:52:34.961619       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0407 13:52:34.961652       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0407 13:52:34.961689       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0407 13:52:34.976489       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0407 13:52:35.051014       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0407 13:52:35.051579       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0407 13:52:35.052398       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 13:52:35.063800       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0407 13:52:35.063879       1 cache.go:39] Caches are synced for autoregister controller
	I0407 13:52:35.064087       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0407 13:52:35.065759       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0407 13:52:35.072619       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0407 13:52:35.115075       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 13:52:35.607159       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0407 13:52:35.933235       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 13:52:36.683115       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0407 13:52:36.691115       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0407 13:52:36.744666       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0407 13:52:36.758172       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 13:52:36.764669       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 13:52:37.359698       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0407 13:52:47.907097       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 13:52:47.959204       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6f67456934815f20e0ba17faf87b64856c95944693d4a90630ff006ff585639b] <==
	I0407 13:52:47.884196       1 disruption.go:371] Sending events to api server.
	I0407 13:52:47.885995       1 shared_informer.go:262] Caches are synced for stateful set
	I0407 13:52:47.888391       1 shared_informer.go:262] Caches are synced for taint
	I0407 13:52:47.889054       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0407 13:52:47.889226       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-673837. Assuming now as a timestamp.
	I0407 13:52:47.889316       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0407 13:52:47.889988       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0407 13:52:47.890323       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0407 13:52:47.890800       1 event.go:294] "Event occurred" object="test-preload-673837" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-673837 event: Registered Node test-preload-673837 in Controller"
	I0407 13:52:47.893975       1 shared_informer.go:262] Caches are synced for PVC protection
	I0407 13:52:47.895055       1 shared_informer.go:262] Caches are synced for persistent volume
	I0407 13:52:47.895152       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0407 13:52:47.914028       1 shared_informer.go:262] Caches are synced for daemon sets
	I0407 13:52:47.925970       1 shared_informer.go:262] Caches are synced for HPA
	I0407 13:52:47.928237       1 shared_informer.go:262] Caches are synced for cronjob
	I0407 13:52:47.930507       1 shared_informer.go:262] Caches are synced for resource quota
	I0407 13:52:47.933814       1 shared_informer.go:262] Caches are synced for ephemeral
	I0407 13:52:47.936197       1 shared_informer.go:262] Caches are synced for job
	I0407 13:52:47.944708       1 shared_informer.go:262] Caches are synced for endpoint
	I0407 13:52:47.945354       1 shared_informer.go:262] Caches are synced for attach detach
	I0407 13:52:47.950624       1 shared_informer.go:262] Caches are synced for resource quota
	I0407 13:52:47.962741       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0407 13:52:48.403555       1 shared_informer.go:262] Caches are synced for garbage collector
	I0407 13:52:48.403572       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0407 13:52:48.414246       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [f60046107574cfb4493d01264abde356b41ed5db30e261c43d908ea9093b3671] <==
	I0407 13:52:37.269999       1 node.go:163] Successfully retrieved node IP: 192.168.39.38
	I0407 13:52:37.270121       1 server_others.go:138] "Detected node IP" address="192.168.39.38"
	I0407 13:52:37.270163       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0407 13:52:37.344676       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0407 13:52:37.344741       1 server_others.go:206] "Using iptables Proxier"
	I0407 13:52:37.345455       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0407 13:52:37.346104       1 server.go:661] "Version info" version="v1.24.4"
	I0407 13:52:37.346139       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:52:37.349751       1 config.go:317] "Starting service config controller"
	I0407 13:52:37.350086       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0407 13:52:37.350194       1 config.go:226] "Starting endpoint slice config controller"
	I0407 13:52:37.350217       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0407 13:52:37.351761       1 config.go:444] "Starting node config controller"
	I0407 13:52:37.351795       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0407 13:52:37.450613       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0407 13:52:37.450674       1 shared_informer.go:262] Caches are synced for service config
	I0407 13:52:37.452421       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a72f8643fb6704d869fe1d5bc12aecd387407b2e866d44c3b5fd8c19c55b17b9] <==
	I0407 13:52:31.992674       1 serving.go:348] Generated self-signed cert in-memory
	W0407 13:52:34.976671       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 13:52:34.977419       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 13:52:34.977522       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 13:52:34.977549       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 13:52:35.025869       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0407 13:52:35.028017       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:52:35.035042       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0407 13:52:35.037333       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 13:52:35.037392       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 13:52:35.037968       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0407 13:52:35.142012       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: E0407 13:52:35.910184    1120 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-test-preload-673837\" already exists" pod="kube-system/etcd-test-preload-673837"
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: I0407 13:52:35.940594    1120 apiserver.go:52] "Watching apiserver"
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: I0407 13:52:35.948798    1120 topology_manager.go:200] "Topology Admit Handler"
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: I0407 13:52:35.948942    1120 topology_manager.go:200] "Topology Admit Handler"
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: I0407 13:52:35.948988    1120 topology_manager.go:200] "Topology Admit Handler"
	Apr 07 13:52:35 test-preload-673837 kubelet[1120]: E0407 13:52:35.951816    1120 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qm55b" podUID=38839847-4bea-458e-bae4-867c92ef468c
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023472    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjlm\" (UniqueName: \"kubernetes.io/projected/b343ff23-9775-4dd5-a00c-b34cd0699be8-kube-api-access-2rjlm\") pod \"kube-proxy-nwk94\" (UID: \"b343ff23-9775-4dd5-a00c-b34cd0699be8\") " pod="kube-system/kube-proxy-nwk94"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023540    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tcm\" (UniqueName: \"kubernetes.io/projected/947c51c7-ebc2-4381-8f6a-2404fdeb88c5-kube-api-access-z5tcm\") pod \"storage-provisioner\" (UID: \"947c51c7-ebc2-4381-8f6a-2404fdeb88c5\") " pod="kube-system/storage-provisioner"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023570    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume\") pod \"coredns-6d4b75cb6d-qm55b\" (UID: \"38839847-4bea-458e-bae4-867c92ef468c\") " pod="kube-system/coredns-6d4b75cb6d-qm55b"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023593    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b343ff23-9775-4dd5-a00c-b34cd0699be8-xtables-lock\") pod \"kube-proxy-nwk94\" (UID: \"b343ff23-9775-4dd5-a00c-b34cd0699be8\") " pod="kube-system/kube-proxy-nwk94"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023610    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b343ff23-9775-4dd5-a00c-b34cd0699be8-lib-modules\") pod \"kube-proxy-nwk94\" (UID: \"b343ff23-9775-4dd5-a00c-b34cd0699be8\") " pod="kube-system/kube-proxy-nwk94"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023627    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b343ff23-9775-4dd5-a00c-b34cd0699be8-kube-proxy\") pod \"kube-proxy-nwk94\" (UID: \"b343ff23-9775-4dd5-a00c-b34cd0699be8\") " pod="kube-system/kube-proxy-nwk94"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023644    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/947c51c7-ebc2-4381-8f6a-2404fdeb88c5-tmp\") pod \"storage-provisioner\" (UID: \"947c51c7-ebc2-4381-8f6a-2404fdeb88c5\") " pod="kube-system/storage-provisioner"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023663    1120 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8pz\" (UniqueName: \"kubernetes.io/projected/38839847-4bea-458e-bae4-867c92ef468c-kube-api-access-9j8pz\") pod \"coredns-6d4b75cb6d-qm55b\" (UID: \"38839847-4bea-458e-bae4-867c92ef468c\") " pod="kube-system/coredns-6d4b75cb6d-qm55b"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.023678    1120 reconciler.go:159] "Reconciler: start to sync state"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: I0407 13:52:36.065243    1120 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=49d2ba83-d47c-43c3-bc51-f1c357412f32 path="/var/lib/kubelet/pods/49d2ba83-d47c-43c3-bc51-f1c357412f32/volumes"
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: E0407 13:52:36.128093    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: E0407 13:52:36.128199    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume podName:38839847-4bea-458e-bae4-867c92ef468c nodeName:}" failed. No retries permitted until 2025-04-07 13:52:36.62816781 +0000 UTC m=+6.800814445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume") pod "coredns-6d4b75cb6d-qm55b" (UID: "38839847-4bea-458e-bae4-867c92ef468c") : object "kube-system"/"coredns" not registered
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: E0407 13:52:36.631266    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:52:36 test-preload-673837 kubelet[1120]: E0407 13:52:36.631329    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume podName:38839847-4bea-458e-bae4-867c92ef468c nodeName:}" failed. No retries permitted until 2025-04-07 13:52:37.63131534 +0000 UTC m=+7.803961957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume") pod "coredns-6d4b75cb6d-qm55b" (UID: "38839847-4bea-458e-bae4-867c92ef468c") : object "kube-system"/"coredns" not registered
	Apr 07 13:52:37 test-preload-673837 kubelet[1120]: E0407 13:52:37.640001    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:52:37 test-preload-673837 kubelet[1120]: E0407 13:52:37.640147    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume podName:38839847-4bea-458e-bae4-867c92ef468c nodeName:}" failed. No retries permitted until 2025-04-07 13:52:39.640113551 +0000 UTC m=+9.812760189 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume") pod "coredns-6d4b75cb6d-qm55b" (UID: "38839847-4bea-458e-bae4-867c92ef468c") : object "kube-system"/"coredns" not registered
	Apr 07 13:52:38 test-preload-673837 kubelet[1120]: E0407 13:52:38.058307    1120 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-qm55b" podUID=38839847-4bea-458e-bae4-867c92ef468c
	Apr 07 13:52:39 test-preload-673837 kubelet[1120]: E0407 13:52:39.657192    1120 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 07 13:52:39 test-preload-673837 kubelet[1120]: E0407 13:52:39.657662    1120 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume podName:38839847-4bea-458e-bae4-867c92ef468c nodeName:}" failed. No retries permitted until 2025-04-07 13:52:43.657640096 +0000 UTC m=+13.830286715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/38839847-4bea-458e-bae4-867c92ef468c-config-volume") pod "coredns-6d4b75cb6d-qm55b" (UID: "38839847-4bea-458e-bae4-867c92ef468c") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [2b4f206685b0773ebbcf85ef3e854a9d5a140771982d4fc19f8f98cf3bb54588] <==
	I0407 13:52:37.352835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-673837 -n test-preload-673837
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-673837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-673837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-673837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-673837: (1.163951287s)
--- FAIL: TestPreload (168.27s)

                                                
                                    
TestKubernetesUpgrade (392.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0407 13:56:00.586180  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m30.932741545s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-222032] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-222032" primary control-plane node in "kubernetes-upgrade-222032" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:55:33.226372  285135 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:55:33.226485  285135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:55:33.226492  285135 out.go:358] Setting ErrFile to fd 2...
	I0407 13:55:33.226502  285135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:55:33.226687  285135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:55:33.227309  285135 out.go:352] Setting JSON to false
	I0407 13:55:33.228245  285135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20280,"bootTime":1744013853,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:55:33.228350  285135 start.go:139] virtualization: kvm guest
	I0407 13:55:33.230481  285135 out.go:177] * [kubernetes-upgrade-222032] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:55:33.231735  285135 notify.go:220] Checking for updates...
	I0407 13:55:33.231790  285135 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:55:33.233100  285135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:55:33.234451  285135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:55:33.235814  285135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:55:33.237159  285135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:55:33.238453  285135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:55:33.240000  285135 config.go:182] Loaded profile config "NoKubernetes-812476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:55:33.240114  285135 config.go:182] Loaded profile config "offline-crio-793502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:55:33.240190  285135 config.go:182] Loaded profile config "running-upgrade-017658": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0407 13:55:33.240267  285135 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:55:33.275645  285135 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:55:33.276926  285135 start.go:297] selected driver: kvm2
	I0407 13:55:33.276942  285135 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:55:33.276952  285135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:55:33.277662  285135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:55:33.277778  285135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 13:55:33.293025  285135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 13:55:33.293079  285135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:55:33.293328  285135 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 13:55:33.293358  285135 cni.go:84] Creating CNI manager for ""
	I0407 13:55:33.293402  285135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:55:33.293410  285135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 13:55:33.293452  285135 start.go:340] cluster config:
	{Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:55:33.293546  285135 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:55:33.296031  285135 out.go:177] * Starting "kubernetes-upgrade-222032" primary control-plane node in "kubernetes-upgrade-222032" cluster
	I0407 13:55:33.297341  285135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:55:33.297383  285135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 13:55:33.297390  285135 cache.go:56] Caching tarball of preloaded images
	I0407 13:55:33.297467  285135 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 13:55:33.297478  285135 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 13:55:33.297559  285135 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/config.json ...
	I0407 13:55:33.297577  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/config.json: {Name:mk92440a6ff6d2fa7237030cafc1eb8cec3d0d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:55:33.297699  285135 start.go:360] acquireMachinesLock for kubernetes-upgrade-222032: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:56:32.792990  285135 start.go:364] duration metric: took 59.495258301s to acquireMachinesLock for "kubernetes-upgrade-222032"
	I0407 13:56:32.793102  285135 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 13:56:32.793251  285135 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 13:56:32.795677  285135 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:56:32.795902  285135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:56:32.795966  285135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:56:32.812642  285135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0407 13:56:32.813160  285135 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:56:32.813803  285135 main.go:141] libmachine: Using API Version  1
	I0407 13:56:32.813832  285135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:56:32.814206  285135 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:56:32.814376  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 13:56:32.814499  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:32.814630  285135 start.go:159] libmachine.API.Create for "kubernetes-upgrade-222032" (driver="kvm2")
	I0407 13:56:32.814661  285135 client.go:168] LocalClient.Create starting
	I0407 13:56:32.814697  285135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem
	I0407 13:56:32.814741  285135 main.go:141] libmachine: Decoding PEM data...
	I0407 13:56:32.814772  285135 main.go:141] libmachine: Parsing certificate...
	I0407 13:56:32.814849  285135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem
	I0407 13:56:32.814886  285135 main.go:141] libmachine: Decoding PEM data...
	I0407 13:56:32.814902  285135 main.go:141] libmachine: Parsing certificate...
	I0407 13:56:32.814949  285135 main.go:141] libmachine: Running pre-create checks...
	I0407 13:56:32.814966  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .PreCreateCheck
	I0407 13:56:32.815369  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetConfigRaw
	I0407 13:56:32.815851  285135 main.go:141] libmachine: Creating machine...
	I0407 13:56:32.815885  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .Create
	I0407 13:56:32.816084  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) creating KVM machine...
	I0407 13:56:32.816108  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) creating network...
	I0407 13:56:32.817333  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found existing default KVM network
	I0407 13:56:32.818257  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:32.818071  285972 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:de:42} reservation:<nil>}
	I0407 13:56:32.819072  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:32.818987  285972 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002018d0}
	I0407 13:56:32.819139  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | created network xml: 
	I0407 13:56:32.819167  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | <network>
	I0407 13:56:32.819184  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   <name>mk-kubernetes-upgrade-222032</name>
	I0407 13:56:32.819198  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   <dns enable='no'/>
	I0407 13:56:32.819209  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   
	I0407 13:56:32.819234  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 13:56:32.819249  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |     <dhcp>
	I0407 13:56:32.819269  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 13:56:32.819281  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |     </dhcp>
	I0407 13:56:32.819291  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   </ip>
	I0407 13:56:32.819299  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG |   
	I0407 13:56:32.819313  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | </network>
	I0407 13:56:32.819324  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | 
	I0407 13:56:32.824304  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | trying to create private KVM network mk-kubernetes-upgrade-222032 192.168.50.0/24...
	I0407 13:56:32.894187  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | private KVM network mk-kubernetes-upgrade-222032 192.168.50.0/24 created
	I0407 13:56:32.894223  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting up store path in /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032 ...
	I0407 13:56:32.894237  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:32.894146  285972 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:56:32.894248  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) building disk image from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 13:56:32.894270  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Downloading /home/jenkins/minikube-integration/20598-242355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:56:33.154073  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:33.153881  285972 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa...
	I0407 13:56:33.254287  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:33.254113  285972 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/kubernetes-upgrade-222032.rawdisk...
	I0407 13:56:33.254330  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | Writing magic tar header
	I0407 13:56:33.254352  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | Writing SSH key tar header
	I0407 13:56:33.254366  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:33.254289  285972 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032 ...
	I0407 13:56:33.254480  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032
	I0407 13:56:33.254516  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines
	I0407 13:56:33.254528  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032 (perms=drwx------)
	I0407 13:56:33.254540  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines (perms=drwxr-xr-x)
	I0407 13:56:33.254551  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:56:33.254558  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube (perms=drwxr-xr-x)
	I0407 13:56:33.254574  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins/minikube-integration/20598-242355 (perms=drwxrwxr-x)
	I0407 13:56:33.254587  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 13:56:33.254610  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355
	I0407 13:56:33.254620  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 13:56:33.254629  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) creating domain...
	I0407 13:56:33.254639  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 13:56:33.254647  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home/jenkins
	I0407 13:56:33.254661  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | checking permissions on dir: /home
	I0407 13:56:33.254672  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | skipping /home - not owner
	I0407 13:56:33.255799  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) define libvirt domain using xml: 
	I0407 13:56:33.255831  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) <domain type='kvm'>
	I0407 13:56:33.255840  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <name>kubernetes-upgrade-222032</name>
	I0407 13:56:33.255845  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <memory unit='MiB'>2200</memory>
	I0407 13:56:33.255850  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <vcpu>2</vcpu>
	I0407 13:56:33.255854  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <features>
	I0407 13:56:33.255859  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <acpi/>
	I0407 13:56:33.255865  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <apic/>
	I0407 13:56:33.255874  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <pae/>
	I0407 13:56:33.255878  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     
	I0407 13:56:33.255887  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   </features>
	I0407 13:56:33.255892  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <cpu mode='host-passthrough'>
	I0407 13:56:33.255926  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   
	I0407 13:56:33.255950  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   </cpu>
	I0407 13:56:33.255961  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <os>
	I0407 13:56:33.255972  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <type>hvm</type>
	I0407 13:56:33.255986  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <boot dev='cdrom'/>
	I0407 13:56:33.255997  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <boot dev='hd'/>
	I0407 13:56:33.256010  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <bootmenu enable='no'/>
	I0407 13:56:33.256025  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   </os>
	I0407 13:56:33.256037  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   <devices>
	I0407 13:56:33.256050  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <disk type='file' device='cdrom'>
	I0407 13:56:33.256080  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/boot2docker.iso'/>
	I0407 13:56:33.256092  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <target dev='hdc' bus='scsi'/>
	I0407 13:56:33.256116  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <readonly/>
	I0407 13:56:33.256141  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </disk>
	I0407 13:56:33.256153  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <disk type='file' device='disk'>
	I0407 13:56:33.256166  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 13:56:33.256193  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/kubernetes-upgrade-222032.rawdisk'/>
	I0407 13:56:33.256205  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <target dev='hda' bus='virtio'/>
	I0407 13:56:33.256216  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </disk>
	I0407 13:56:33.256231  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <interface type='network'>
	I0407 13:56:33.256243  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <source network='mk-kubernetes-upgrade-222032'/>
	I0407 13:56:33.256254  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <model type='virtio'/>
	I0407 13:56:33.256265  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </interface>
	I0407 13:56:33.256276  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <interface type='network'>
	I0407 13:56:33.256288  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <source network='default'/>
	I0407 13:56:33.256303  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <model type='virtio'/>
	I0407 13:56:33.256314  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </interface>
	I0407 13:56:33.256325  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <serial type='pty'>
	I0407 13:56:33.256336  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <target port='0'/>
	I0407 13:56:33.256345  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </serial>
	I0407 13:56:33.256358  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <console type='pty'>
	I0407 13:56:33.256371  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <target type='serial' port='0'/>
	I0407 13:56:33.256382  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </console>
	I0407 13:56:33.256396  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     <rng model='virtio'>
	I0407 13:56:33.256409  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)       <backend model='random'>/dev/random</backend>
	I0407 13:56:33.256418  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     </rng>
	I0407 13:56:33.256439  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     
	I0407 13:56:33.256450  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)     
	I0407 13:56:33.256469  285135 main.go:141] libmachine: (kubernetes-upgrade-222032)   </devices>
	I0407 13:56:33.256482  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) </domain>
	I0407 13:56:33.256496  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) 
	I0407 13:56:33.260608  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:bf:be:68 in network default
	I0407 13:56:33.261183  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) starting domain...
	I0407 13:56:33.261209  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:33.261219  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) ensuring networks are active...
	I0407 13:56:33.261902  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Ensuring network default is active
	I0407 13:56:33.262222  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Ensuring network mk-kubernetes-upgrade-222032 is active
	I0407 13:56:33.262979  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) getting domain XML...
	I0407 13:56:33.263810  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) creating domain...
	I0407 13:56:34.488281  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) waiting for IP...
	I0407 13:56:34.489211  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:34.489593  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:34.489656  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:34.489592  285972 retry.go:31] will retry after 303.164896ms: waiting for domain to come up
	I0407 13:56:34.794177  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:34.794771  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:34.794815  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:34.794732  285972 retry.go:31] will retry after 255.381903ms: waiting for domain to come up
	I0407 13:56:35.052742  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.053344  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.053407  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:35.053327  285972 retry.go:31] will retry after 301.187955ms: waiting for domain to come up
	I0407 13:56:35.355751  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.356154  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.356195  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:35.356118  285972 retry.go:31] will retry after 476.800466ms: waiting for domain to come up
	I0407 13:56:35.834768  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.835341  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:35.835368  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:35.835300  285972 retry.go:31] will retry after 700.167949ms: waiting for domain to come up
	I0407 13:56:36.537160  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:36.537854  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:36.537901  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:36.537781  285972 retry.go:31] will retry after 625.82493ms: waiting for domain to come up
	I0407 13:56:37.165741  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:37.166348  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:37.166401  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:37.166312  285972 retry.go:31] will retry after 788.642501ms: waiting for domain to come up
	I0407 13:56:37.956639  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:37.957178  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:37.957212  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:37.957154  285972 retry.go:31] will retry after 1.179001973s: waiting for domain to come up
	I0407 13:56:39.137654  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:39.138229  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:39.138259  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:39.138190  285972 retry.go:31] will retry after 1.694068974s: waiting for domain to come up
	I0407 13:56:40.835248  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:40.835756  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:40.835786  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:40.835733  285972 retry.go:31] will retry after 1.443140909s: waiting for domain to come up
	I0407 13:56:42.280688  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:42.281205  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:42.281228  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:42.281166  285972 retry.go:31] will retry after 1.759483997s: waiting for domain to come up
	I0407 13:56:44.041853  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:44.042257  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:44.042300  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:44.042242  285972 retry.go:31] will retry after 3.1022279s: waiting for domain to come up
	I0407 13:56:47.145743  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:47.146192  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:47.146224  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:47.146153  285972 retry.go:31] will retry after 4.192277798s: waiting for domain to come up
	I0407 13:56:51.342959  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:51.343359  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find current IP address of domain kubernetes-upgrade-222032 in network mk-kubernetes-upgrade-222032
	I0407 13:56:51.343389  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | I0407 13:56:51.343339  285972 retry.go:31] will retry after 3.856696075s: waiting for domain to come up
	I0407 13:56:55.201654  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.202095  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) found domain IP: 192.168.50.198
	I0407 13:56:55.202126  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has current primary IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.202137  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) reserving static IP address...
	I0407 13:56:55.202495  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-222032", mac: "52:54:00:17:55:58", ip: "192.168.50.198"} in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.282329  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) reserved static IP address 192.168.50.198 for domain kubernetes-upgrade-222032
	I0407 13:56:55.282361  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | Getting to WaitForSSH function...
	I0407 13:56:55.282368  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) waiting for SSH...
	I0407 13:56:55.284891  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.285324  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.285357  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.285474  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | Using SSH client type: external
	I0407 13:56:55.285515  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa (-rw-------)
	I0407 13:56:55.285572  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 13:56:55.285591  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | About to run SSH command:
	I0407 13:56:55.285624  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | exit 0
	I0407 13:56:55.416687  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | SSH cmd err, output: <nil>: 
	I0407 13:56:55.417029  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) KVM machine creation complete
	I0407 13:56:55.417341  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetConfigRaw
	I0407 13:56:55.418055  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:55.418281  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:55.418434  285135 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 13:56:55.418448  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetState
	I0407 13:56:55.419819  285135 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 13:56:55.419837  285135 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 13:56:55.419842  285135 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 13:56:55.419848  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:55.422198  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.422689  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.422726  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.422890  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:55.423082  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.423288  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.423427  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:55.423594  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:55.423840  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:55.423850  285135 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 13:56:55.535960  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:56:55.535992  285135 main.go:141] libmachine: Detecting the provisioner...
	I0407 13:56:55.536003  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:55.538711  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.539076  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.539109  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.539315  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:55.539535  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.539750  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.539884  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:55.540096  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:55.540296  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:55.540307  285135 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 13:56:55.657711  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 13:56:55.657805  285135 main.go:141] libmachine: found compatible host: buildroot
	I0407 13:56:55.657815  285135 main.go:141] libmachine: Provisioning with buildroot...
	I0407 13:56:55.657823  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 13:56:55.658087  285135 buildroot.go:166] provisioning hostname "kubernetes-upgrade-222032"
	I0407 13:56:55.658119  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 13:56:55.658351  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:55.661320  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.661735  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.661772  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.661928  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:55.662156  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.662349  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.662515  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:55.662733  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:55.663009  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:55.663027  285135 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-222032 && echo "kubernetes-upgrade-222032" | sudo tee /etc/hostname
	I0407 13:56:55.795704  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-222032
	
	I0407 13:56:55.795745  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:55.798963  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.799508  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.799544  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.799735  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:55.799940  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.800093  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:55.800276  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:55.800456  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:55.800657  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:55.800673  285135 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-222032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-222032/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-222032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:56:55.935102  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:56:55.935141  285135 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 13:56:55.935198  285135 buildroot.go:174] setting up certificates
	I0407 13:56:55.935217  285135 provision.go:84] configureAuth start
	I0407 13:56:55.935238  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 13:56:55.935564  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 13:56:55.938066  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.938527  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.938559  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.938821  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:55.941967  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.942383  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:55.942443  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:55.942580  285135 provision.go:143] copyHostCerts
	I0407 13:56:55.942661  285135 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 13:56:55.942685  285135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 13:56:55.942749  285135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 13:56:55.942887  285135 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 13:56:55.942900  285135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 13:56:55.942931  285135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 13:56:55.943040  285135 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 13:56:55.943052  285135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 13:56:55.943081  285135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 13:56:55.943166  285135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-222032 san=[127.0.0.1 192.168.50.198 kubernetes-upgrade-222032 localhost minikube]
	I0407 13:56:56.260179  285135 provision.go:177] copyRemoteCerts
	I0407 13:56:56.260247  285135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:56:56.260274  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.263850  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.264329  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.264361  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.264591  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.264888  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.265102  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.265254  285135 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 13:56:56.350555  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:56:56.375838  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0407 13:56:56.401829  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:56:56.426906  285135 provision.go:87] duration metric: took 491.673695ms to configureAuth
	I0407 13:56:56.426940  285135 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:56:56.427192  285135 config.go:182] Loaded profile config "kubernetes-upgrade-222032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 13:56:56.427291  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.430195  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.430593  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.430630  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.430759  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.430957  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.431141  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.431315  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.431483  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:56.431731  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:56.431755  285135 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 13:56:56.668129  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 13:56:56.668244  285135 main.go:141] libmachine: Checking connection to Docker...
	I0407 13:56:56.668282  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetURL
	I0407 13:56:56.669481  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | using libvirt version 6000000
	I0407 13:56:56.671759  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.672113  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.672166  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.672284  285135 main.go:141] libmachine: Docker is up and running!
	I0407 13:56:56.672297  285135 main.go:141] libmachine: Reticulating splines...
	I0407 13:56:56.672307  285135 client.go:171] duration metric: took 23.857633717s to LocalClient.Create
	I0407 13:56:56.672337  285135 start.go:167] duration metric: took 23.85770842s to libmachine.API.Create "kubernetes-upgrade-222032"
	I0407 13:56:56.672347  285135 start.go:293] postStartSetup for "kubernetes-upgrade-222032" (driver="kvm2")
	I0407 13:56:56.672356  285135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:56:56.672374  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:56.672660  285135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:56:56.672693  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.674995  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.675281  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.675310  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.675466  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.675652  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.675798  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.675924  285135 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 13:56:56.768836  285135 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:56:56.775316  285135 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:56:56.775350  285135 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 13:56:56.775436  285135 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 13:56:56.775540  285135 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 13:56:56.775673  285135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:56:56.789566  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 13:56:56.818085  285135 start.go:296] duration metric: took 145.720081ms for postStartSetup
	I0407 13:56:56.818177  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetConfigRaw
	I0407 13:56:56.818858  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 13:56:56.821366  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.821778  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.821807  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.822101  285135 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/config.json ...
	I0407 13:56:56.822365  285135 start.go:128] duration metric: took 24.029094959s to createHost
	I0407 13:56:56.822419  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.824823  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.825190  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.825222  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.825387  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.825600  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.825759  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.825903  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.826069  285135 main.go:141] libmachine: Using SSH client type: native
	I0407 13:56:56.826284  285135 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 13:56:56.826296  285135 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:56:56.942363  285135 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034216.900104884
	
	I0407 13:56:56.942386  285135 fix.go:216] guest clock: 1744034216.900104884
	I0407 13:56:56.942394  285135 fix.go:229] Guest: 2025-04-07 13:56:56.900104884 +0000 UTC Remote: 2025-04-07 13:56:56.822382034 +0000 UTC m=+83.633076110 (delta=77.72285ms)
	I0407 13:56:56.942432  285135 fix.go:200] guest clock delta is within tolerance: 77.72285ms
	I0407 13:56:56.942437  285135 start.go:83] releasing machines lock for "kubernetes-upgrade-222032", held for 24.149391694s
	I0407 13:56:56.942463  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:56.942825  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 13:56:56.945851  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.946285  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.946317  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.946515  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:56.947113  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:56.947322  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 13:56:56.947425  285135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 13:56:56.947472  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.947599  285135 ssh_runner.go:195] Run: cat /version.json
	I0407 13:56:56.947632  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 13:56:56.950673  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.950702  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.951050  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.951095  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.951143  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:56.951161  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:56.951221  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.951351  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 13:56:56.951459  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.951486  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 13:56:56.951600  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.951664  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 13:56:56.951811  285135 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 13:56:56.951876  285135 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 13:56:57.072304  285135 ssh_runner.go:195] Run: systemctl --version
	I0407 13:56:57.080212  285135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 13:56:57.241714  285135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 13:56:57.249901  285135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:56:57.249992  285135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:56:57.268879  285135 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:56:57.268902  285135 start.go:495] detecting cgroup driver to use...
	I0407 13:56:57.268981  285135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:56:57.287115  285135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:56:57.300996  285135 docker.go:217] disabling cri-docker service (if available) ...
	I0407 13:56:57.301058  285135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 13:56:57.316201  285135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 13:56:57.331011  285135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 13:56:57.467051  285135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 13:56:57.647560  285135 docker.go:233] disabling docker service ...
	I0407 13:56:57.647643  285135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 13:56:57.665950  285135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 13:56:57.680393  285135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 13:56:57.860996  285135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 13:56:58.003716  285135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 13:56:58.019785  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:56:58.040853  285135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 13:56:58.040932  285135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:56:58.051898  285135 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 13:56:58.051975  285135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:56:58.063832  285135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:56:58.075411  285135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 13:56:58.090842  285135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:56:58.103556  285135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:56:58.118637  285135 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:56:58.118701  285135 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:56:58.138096  285135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
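	(A minimal sketch, illustrative only, of the netfilter preparation the preceding steps perform: the sysctl read above fails with status 255 until br_netfilter is loaded, after which the key exists, and IPv4 forwarding is enabled for pod traffic.)
		sudo modprobe br_netfilter                        # load the bridge netfilter module
		sudo sysctl net.bridge.bridge-nf-call-iptables    # succeeds once the module is loaded
		echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # enable IPv4 forwarding, as in the step above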
	I0407 13:56:58.153144  285135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:56:58.331759  285135 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 13:56:58.451612  285135 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 13:56:58.451708  285135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 13:56:58.457945  285135 start.go:563] Will wait 60s for crictl version
	I0407 13:56:58.458011  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:56:58.463538  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:56:58.535926  285135 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 13:56:58.536031  285135 ssh_runner.go:195] Run: crio --version
	I0407 13:56:58.568758  285135 ssh_runner.go:195] Run: crio --version
	I0407 13:56:58.607497  285135 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 13:56:58.608661  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 13:56:58.612315  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:58.612844  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 14:56:48 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 13:56:58.612924  285135 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 13:56:58.613313  285135 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 13:56:58.618424  285135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:56:58.633118  285135 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:56:58.633273  285135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 13:56:58.633337  285135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:56:58.673182  285135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:56:58.673264  285135 ssh_runner.go:195] Run: which lz4
	I0407 13:56:58.678029  285135 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:56:58.682672  285135 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:56:58.682701  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 13:57:00.517965  285135 crio.go:462] duration metric: took 1.839973906s to copy over tarball
	I0407 13:57:00.518077  285135 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:57:03.402063  285135 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.883940719s)
	I0407 13:57:03.402122  285135 crio.go:469] duration metric: took 2.884111992s to extract the tarball
	I0407 13:57:03.402134  285135 ssh_runner.go:146] rm: /preloaded.tar.lz4
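	(Condensed, the preload path above amounts to the following sketch; the commands are as captured, and the copy step stands in for minikube's internal file transfer.)
		stat -c "%s %y" /preloaded.tar.lz4 || true   # absent on a fresh guest, so the cached tarball is copied over
		# copy .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		rm /preloaded.tar.lz4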
	I0407 13:57:03.446214  285135 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 13:57:03.502477  285135 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 13:57:03.502507  285135 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 13:57:03.502595  285135 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:57:03.502657  285135 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.502684  285135 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 13:57:03.502739  285135 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:03.502769  285135 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:03.502749  285135 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 13:57:03.502598  285135 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:03.502969  285135 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:03.504578  285135 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.504601  285135 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 13:57:03.504765  285135 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 13:57:03.504767  285135 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:03.505272  285135 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:57:03.505275  285135 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:03.505287  285135 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:03.505275  285135 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:03.642938  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.647147  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 13:57:03.648967  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:03.650930  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:03.662272  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:03.672471  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 13:57:03.720361  285135 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 13:57:03.720409  285135 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.720480  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.775745  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:03.808752  285135 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 13:57:03.808806  285135 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 13:57:03.808860  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.815820  285135 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 13:57:03.815877  285135 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:03.815931  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.820590  285135 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 13:57:03.820636  285135 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:03.820682  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.829381  285135 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 13:57:03.829478  285135 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:03.829513  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.829532  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.829401  285135 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 13:57:03.829602  285135 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 13:57:03.829644  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.868981  285135 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 13:57:03.869076  285135 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:03.869130  285135 ssh_runner.go:195] Run: which crictl
	I0407 13:57:03.869150  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:57:03.869242  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:03.869251  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:03.869335  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:03.902829  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:57:03.902898  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:03.902921  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:04.042063  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:57:04.042216  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:04.042246  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:04.042308  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:04.042446  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:57:04.092957  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:04.092969  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 13:57:04.200772  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 13:57:04.200784  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 13:57:04.200873  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 13:57:04.200875  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 13:57:04.214345  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 13:57:04.282774  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 13:57:04.282896  285135 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 13:57:04.354024  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 13:57:04.354113  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 13:57:04.372164  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 13:57:04.372164  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 13:57:04.372237  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 13:57:04.396757  285135 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 13:57:05.277176  285135 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:57:05.432343  285135 cache_images.go:92] duration metric: took 1.929813604s to LoadCachedImages
	W0407 13:57:05.432472  285135 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0407 13:57:05.432491  285135 kubeadm.go:934] updating node { 192.168.50.198 8443 v1.20.0 crio true true} ...
	I0407 13:57:05.432605  285135 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-222032 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:57:05.432694  285135 ssh_runner.go:195] Run: crio config
	I0407 13:57:05.506690  285135 cni.go:84] Creating CNI manager for ""
	I0407 13:57:05.506720  285135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 13:57:05.506733  285135 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:57:05.506753  285135 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.198 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-222032 NodeName:kubernetes-upgrade-222032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 13:57:05.506878  285135 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-222032"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 13:57:05.506949  285135 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 13:57:05.518368  285135 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:57:05.518483  285135 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:57:05.530467  285135 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0407 13:57:05.549882  285135 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:57:05.568802  285135 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
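	(The 2126-byte payload written above is the kubeadm config shown earlier. Assuming minikube's usual flow, in which kubeadm.yaml.new replaces /var/tmp/minikube/kubeadm.yaml when it differs, the config is later handed to the matching kubeadm binary roughly as in this sketch; additional flags minikube passes are not part of this capture.)
		sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml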
	I0407 13:57:05.587635  285135 ssh_runner.go:195] Run: grep 192.168.50.198	control-plane.minikube.internal$ /etc/hosts
	I0407 13:57:05.592212  285135 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:57:05.609707  285135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:57:05.755189  285135 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:57:05.774748  285135 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032 for IP: 192.168.50.198
	I0407 13:57:05.774776  285135 certs.go:194] generating shared ca certs ...
	I0407 13:57:05.774797  285135 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:05.775009  285135 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 13:57:05.775086  285135 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 13:57:05.775104  285135 certs.go:256] generating profile certs ...
	I0407 13:57:05.775190  285135 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.key
	I0407 13:57:05.775211  285135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.crt with IP's: []
	I0407 13:57:06.240443  285135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.crt ...
	I0407 13:57:06.240485  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.crt: {Name:mk60297e8764f53e91ffeb982c383d5a764abdd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.240679  285135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.key ...
	I0407 13:57:06.240695  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.key: {Name:mk08d08214ad87f40f1f01bf0cb36f00c08b03a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.240803  285135 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key.4bde82e8
	I0407 13:57:06.240831  285135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt.4bde82e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.198]
	I0407 13:57:06.610379  285135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt.4bde82e8 ...
	I0407 13:57:06.610411  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt.4bde82e8: {Name:mk9c8edd4da63266fec10745cb111dd66bc2e73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.610583  285135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key.4bde82e8 ...
	I0407 13:57:06.610597  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key.4bde82e8: {Name:mke4ca99ca1c825904da33532e686e19c80aef5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.610689  285135 certs.go:381] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt.4bde82e8 -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt
	I0407 13:57:06.610772  285135 certs.go:385] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key.4bde82e8 -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key
	I0407 13:57:06.610867  285135 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key
	I0407 13:57:06.610884  285135 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.crt with IP's: []
	I0407 13:57:06.725064  285135 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.crt ...
	I0407 13:57:06.725107  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.crt: {Name:mk3ca35f31af381622d7cfc070077e6c11ddb783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.725329  285135 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key ...
	I0407 13:57:06.725351  285135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key: {Name:mk6e98cc4ebc780d79fdfe14688f1da1de4c6b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:06.725603  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 13:57:06.725667  285135 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 13:57:06.725681  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 13:57:06.725710  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 13:57:06.725746  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 13:57:06.725777  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 13:57:06.725847  285135 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 13:57:06.726753  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:57:06.770528  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:57:06.800762  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:57:06.831784  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:57:06.861135  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 13:57:06.889671  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0407 13:57:06.919793  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:57:06.948375  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:57:06.977603  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:57:07.004643  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 13:57:07.032143  285135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 13:57:07.064350  285135 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:57:07.086988  285135 ssh_runner.go:195] Run: openssl version
	I0407 13:57:07.094032  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 13:57:07.106702  285135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 13:57:07.112016  285135 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 13:57:07.112083  285135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 13:57:07.119035  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 13:57:07.135113  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 13:57:07.151670  285135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 13:57:07.157417  285135 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 13:57:07.157500  285135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 13:57:07.164655  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:57:07.178402  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:57:07.191940  285135 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:57:07.197336  285135 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:57:07.197416  285135 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:57:07.203499  285135 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:57:07.217428  285135 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:57:07.222423  285135 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:57:07.222491  285135 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:57:07.222583  285135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 13:57:07.222671  285135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 13:57:07.265432  285135 cri.go:89] found id: ""
	I0407 13:57:07.265535  285135 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:57:07.276534  285135 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:57:07.287219  285135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:57:07.299567  285135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:57:07.299592  285135 kubeadm.go:157] found existing configuration files:
	
	I0407 13:57:07.299653  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:57:07.311093  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:57:07.311205  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:57:07.323024  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:57:07.333366  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:57:07.333438  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:57:07.347496  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:57:07.358739  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:57:07.358807  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:57:07.370895  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:57:07.385789  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:57:07.385864  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:57:07.400681  285135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:57:07.748625  285135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:59:05.132262  285135 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 13:59:05.132396  285135 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 13:59:05.133942  285135 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:59:05.134010  285135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:59:05.134138  285135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:59:05.134279  285135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:59:05.134430  285135 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:59:05.134516  285135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:59:05.136228  285135 out.go:235]   - Generating certificates and keys ...
	I0407 13:59:05.136301  285135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:59:05.136351  285135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:59:05.136408  285135 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 13:59:05.136476  285135 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 13:59:05.136534  285135 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 13:59:05.136573  285135 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 13:59:05.136652  285135 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 13:59:05.136824  285135 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0407 13:59:05.136895  285135 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 13:59:05.137036  285135 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0407 13:59:05.137126  285135 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 13:59:05.137183  285135 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 13:59:05.137235  285135 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 13:59:05.137282  285135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:59:05.137337  285135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:59:05.137402  285135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:59:05.137477  285135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:59:05.137550  285135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:59:05.137710  285135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:59:05.137871  285135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:59:05.137932  285135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:59:05.138027  285135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:59:05.139609  285135 out.go:235]   - Booting up control plane ...
	I0407 13:59:05.139710  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:59:05.139842  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:59:05.139972  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:59:05.140065  285135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:59:05.140186  285135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:59:05.140230  285135 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:59:05.140309  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:05.140556  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:05.140663  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:05.140852  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:05.140951  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:05.141154  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:05.141228  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:05.141379  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:05.141446  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:05.141672  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:05.141680  285135 kubeadm.go:310] 
	I0407 13:59:05.141713  285135 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 13:59:05.141746  285135 kubeadm.go:310] 		timed out waiting for the condition
	I0407 13:59:05.141764  285135 kubeadm.go:310] 
	I0407 13:59:05.141802  285135 kubeadm.go:310] 	This error is likely caused by:
	I0407 13:59:05.141831  285135 kubeadm.go:310] 		- The kubelet is not running
	I0407 13:59:05.141946  285135 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 13:59:05.141954  285135 kubeadm.go:310] 
	I0407 13:59:05.142060  285135 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 13:59:05.142109  285135 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 13:59:05.142160  285135 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 13:59:05.142170  285135 kubeadm.go:310] 
	I0407 13:59:05.142300  285135 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 13:59:05.142389  285135 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 13:59:05.142397  285135 kubeadm.go:310] 
	I0407 13:59:05.142561  285135 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 13:59:05.142643  285135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 13:59:05.142733  285135 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 13:59:05.142828  285135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 13:59:05.142871  285135 kubeadm.go:310] 
	W0407 13:59:05.142992  285135 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-222032 localhost] and IPs [192.168.50.198 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 13:59:05.143037  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 13:59:07.071316  285135 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.928249101s)
	I0407 13:59:07.071405  285135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:59:07.086370  285135 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:59:07.098325  285135 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:59:07.098346  285135 kubeadm.go:157] found existing configuration files:
	
	I0407 13:59:07.098393  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:59:07.109531  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:59:07.109584  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:59:07.119894  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:59:07.129893  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:59:07.129963  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:59:07.140249  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:59:07.150216  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:59:07.150286  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:59:07.159836  285135 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:59:07.169137  285135 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:59:07.169204  285135 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:59:07.178655  285135 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:59:07.250088  285135 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 13:59:07.250401  285135 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 13:59:07.388628  285135 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 13:59:07.388939  285135 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 13:59:07.389230  285135 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 13:59:07.604792  285135 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 13:59:07.606765  285135 out.go:235]   - Generating certificates and keys ...
	I0407 13:59:07.606870  285135 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 13:59:07.606932  285135 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 13:59:07.607000  285135 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 13:59:07.607101  285135 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 13:59:07.607212  285135 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 13:59:07.607301  285135 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 13:59:07.607392  285135 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 13:59:07.607607  285135 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 13:59:07.607976  285135 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 13:59:07.608325  285135 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 13:59:07.608410  285135 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 13:59:07.608507  285135 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 13:59:07.803790  285135 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 13:59:07.909543  285135 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 13:59:08.038294  285135 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 13:59:08.209247  285135 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 13:59:08.229592  285135 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 13:59:08.234428  285135 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 13:59:08.234668  285135 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 13:59:08.374256  285135 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 13:59:08.375949  285135 out.go:235]   - Booting up control plane ...
	I0407 13:59:08.376102  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 13:59:08.384499  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 13:59:08.387008  285135 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 13:59:08.387188  285135 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 13:59:08.389618  285135 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 13:59:48.391154  285135 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 13:59:48.391727  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:48.391975  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 13:59:53.392598  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 13:59:53.392816  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:00:03.393575  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:00:03.393800  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:00:23.395103  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:00:23.395414  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:01:03.394279  285135 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:01:03.394549  285135 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:01:03.394571  285135 kubeadm.go:310] 
	I0407 14:01:03.394633  285135 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:01:03.394684  285135 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:01:03.394695  285135 kubeadm.go:310] 
	I0407 14:01:03.394763  285135 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:01:03.394829  285135 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:01:03.394982  285135 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:01:03.394997  285135 kubeadm.go:310] 
	I0407 14:01:03.395129  285135 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:01:03.395186  285135 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:01:03.395230  285135 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:01:03.395238  285135 kubeadm.go:310] 
	I0407 14:01:03.395392  285135 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:01:03.395528  285135 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:01:03.395553  285135 kubeadm.go:310] 
	I0407 14:01:03.395719  285135 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:01:03.395843  285135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:01:03.395954  285135 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:01:03.396082  285135 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:01:03.396099  285135 kubeadm.go:310] 
	I0407 14:01:03.396679  285135 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:01:03.396801  285135 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:01:03.396960  285135 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:01:03.397063  285135 kubeadm.go:394] duration metric: took 3m56.174576594s to StartCluster
	I0407 14:01:03.397123  285135 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:01:03.397200  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:01:03.456107  285135 cri.go:89] found id: ""
	I0407 14:01:03.456136  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.456147  285135 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:01:03.456156  285135 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:01:03.456255  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:01:03.507576  285135 cri.go:89] found id: ""
	I0407 14:01:03.507611  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.507631  285135 logs.go:284] No container was found matching "etcd"
	I0407 14:01:03.507641  285135 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:01:03.507717  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:01:03.551926  285135 cri.go:89] found id: ""
	I0407 14:01:03.551958  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.551968  285135 logs.go:284] No container was found matching "coredns"
	I0407 14:01:03.551975  285135 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:01:03.552048  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:01:03.593434  285135 cri.go:89] found id: ""
	I0407 14:01:03.593476  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.593490  285135 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:01:03.593501  285135 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:01:03.593578  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:01:03.636469  285135 cri.go:89] found id: ""
	I0407 14:01:03.636503  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.636513  285135 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:01:03.636520  285135 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:01:03.636575  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:01:03.673385  285135 cri.go:89] found id: ""
	I0407 14:01:03.673420  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.673432  285135 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:01:03.673441  285135 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:01:03.673514  285135 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:01:03.713596  285135 cri.go:89] found id: ""
	I0407 14:01:03.713635  285135 logs.go:282] 0 containers: []
	W0407 14:01:03.713649  285135 logs.go:284] No container was found matching "kindnet"
	I0407 14:01:03.713663  285135 logs.go:123] Gathering logs for kubelet ...
	I0407 14:01:03.713683  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:01:03.778821  285135 logs.go:123] Gathering logs for dmesg ...
	I0407 14:01:03.778875  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:01:03.800283  285135 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:01:03.800324  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:01:03.951738  285135 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:01:03.951770  285135 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:01:03.951795  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:01:04.060952  285135 logs.go:123] Gathering logs for container status ...
	I0407 14:01:04.060995  285135 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 14:01:04.104325  285135 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:01:04.104393  285135 out.go:270] * 
	* 
	W0407 14:01:04.104478  285135 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:01:04.104494  285135 out.go:270] * 
	* 
	W0407 14:01:04.105413  285135 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:01:04.108598  285135 out.go:201] 
	W0407 14:01:04.110260  285135 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:01:04.110309  285135 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:01:04.110341  285135 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:01:04.111835  285135 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
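The failing step above is kubeadm's wait-control-plane phase: it repeatedly probes the kubelet's health endpoint (the 'curl -sSL http://localhost:10248/healthz' call quoted in the log), only ever sees connection refused, and gives up after the wait period, so minikube exits with K8S_KUBELET_NOT_RUNNING and suggests checking 'journalctl -xeu kubelet' or passing --extra-config=kubelet.cgroup-driver=systemd. As an illustrative sketch only (an assumption, not kubeadm's or minikube's actual code), that wait boils down to a polling loop like the following, with the URL and the 40s initial timeout taken from the log:

	// Illustrative sketch (not kubeadm/minikube source): poll the kubelet healthz
	// endpoint until it answers or the deadline passes, mirroring the
	// "curl -sSL http://localhost:10248/healthz" loop in the output above.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is serving /healthz
				}
			}
			time.Sleep(time.Second) // connection refused: kubelet not up yet, retry
		}
		return fmt.Errorf("timed out waiting for kubelet at %s", url)
	}

	func main() {
		// URL and 40s value taken from the kubeadm output above.
		if err := waitForKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
			fmt.Println(err)
		}
	}
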
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-222032
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-222032: (2.446537505s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-222032 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-222032 status --format={{.Host}}: exit status 7 (80.200684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.091526951s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-222032 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.743285ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-222032] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-222032
	    minikube start -p kubernetes-upgrade-222032 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2220322 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-222032 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
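The downgrade attempt fails immediately and by design: minikube refuses to move the existing v1.32.2 cluster back to v1.20.0 and exits 106 with K8S_DOWNGRADE_UNSUPPORTED, offering delete-and-recreate or a second profile instead. A minimal sketch of such a guard (an assumption for illustration, not minikube's actual implementation; the helper name checkVersionChange is hypothetical) is a semantic-version comparison:

	// Illustrative sketch: refuse to "start" an existing cluster at an older
	// Kubernetes version than the one it is already running.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkVersionChange (hypothetical helper) returns an error when the requested
	// version is older than the version recorded in the existing profile.
	func checkVersionChange(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		// Versions taken from the test output above.
		if err := checkVersionChange("v1.32.2", "v1.20.0"); err != nil {
			fmt.Println("Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		}
	}
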
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222032 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (14.496055117s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-07 14:02:02.448359223 +0000 UTC m=+4005.839813383
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-222032 -n kubernetes-upgrade-222032
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-222032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-222032 logs -n 25: (1.644120458s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | stopped-upgrade-360931 stop           | minikube                  | jenkins | v1.26.0 | 07 Apr 25 13:57 UTC | 07 Apr 25 13:57 UTC |
	| start   | -p stopped-upgrade-360931             | stopped-upgrade-360931    | jenkins | v1.35.0 | 07 Apr 25 13:57 UTC | 07 Apr 25 13:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-360931             | stopped-upgrade-360931    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-expiration-837665             | cert-expiration-837665    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-017658             | running-upgrade-017658    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-939490 ssh cat     | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p pause-440331 --memory=2048         | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-574980 ssh               | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-574980 -- sudo        | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p auto-471753 --memory=3072          | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-440331                       | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-471753 pgrep -a               | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:01:47
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:01:47.997756  290352 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:01:47.999029  290352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:01:47.999048  290352 out.go:358] Setting ErrFile to fd 2...
	I0407 14:01:47.999056  290352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:01:47.999427  290352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:01:48.000379  290352 out.go:352] Setting JSON to false
	I0407 14:01:48.001476  290352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20655,"bootTime":1744013853,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:01:48.001572  290352 start.go:139] virtualization: kvm guest
	I0407 14:01:48.003007  290352 out.go:177] * [kubernetes-upgrade-222032] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:01:48.004673  290352 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:01:48.004700  290352 notify.go:220] Checking for updates...
	I0407 14:01:48.007344  290352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:01:48.008596  290352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:01:48.009958  290352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:01:48.011234  290352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:01:48.012487  290352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:01:48.014058  290352 config.go:182] Loaded profile config "kubernetes-upgrade-222032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:01:48.014526  290352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:48.014616  290352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:48.029595  290352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
	I0407 14:01:48.030131  290352 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:48.030753  290352 main.go:141] libmachine: Using API Version  1
	I0407 14:01:48.030778  290352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:48.031163  290352 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:48.031377  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:48.031689  290352 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:01:48.032180  290352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:48.032226  290352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:48.050286  290352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0407 14:01:48.050860  290352 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:48.051504  290352 main.go:141] libmachine: Using API Version  1
	I0407 14:01:48.051561  290352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:48.052077  290352 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:48.052315  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:48.089582  290352 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:01:48.090748  290352 start.go:297] selected driver: kvm2
	I0407 14:01:48.090765  290352 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:48.090882  290352 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:01:48.091873  290352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:01:48.092018  290352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:01:48.109371  290352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:01:48.109846  290352 cni.go:84] Creating CNI manager for ""
	I0407 14:01:48.109904  290352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:01:48.109950  290352 start.go:340] cluster config:
	{Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:48.110091  290352 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:01:48.111564  290352 out.go:177] * Starting "kubernetes-upgrade-222032" primary control-plane node in "kubernetes-upgrade-222032" cluster
	I0407 14:01:48.112514  290352 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:01:48.112561  290352 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:01:48.112571  290352 cache.go:56] Caching tarball of preloaded images
	I0407 14:01:48.112678  290352 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:01:48.112693  290352 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:01:48.112840  290352 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/config.json ...
	I0407 14:01:48.113064  290352 start.go:360] acquireMachinesLock for kubernetes-upgrade-222032: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:01:48.113116  290352 start.go:364] duration metric: took 31.403µs to acquireMachinesLock for "kubernetes-upgrade-222032"
	I0407 14:01:48.113137  290352 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:01:48.113147  290352 fix.go:54] fixHost starting: 
	I0407 14:01:48.113535  290352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:48.113584  290352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:48.130208  290352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0407 14:01:48.130783  290352 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:48.131346  290352 main.go:141] libmachine: Using API Version  1
	I0407 14:01:48.131375  290352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:48.131769  290352 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:48.131960  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:48.132197  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetState
	I0407 14:01:48.133996  290352 fix.go:112] recreateIfNeeded on kubernetes-upgrade-222032: state=Running err=<nil>
	W0407 14:01:48.134014  290352 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:01:48.135560  290352 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-222032" VM ...
	I0407 14:01:46.657513  289526 pod_ready.go:103] pod "coredns-668d6bf9bc-gpf6x" in "kube-system" namespace has status "Ready":"False"
	I0407 14:01:49.157664  289526 pod_ready.go:103] pod "coredns-668d6bf9bc-gpf6x" in "kube-system" namespace has status "Ready":"False"
	I0407 14:01:48.136932  290352 machine.go:93] provisionDockerMachine start ...
	I0407 14:01:48.136953  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:48.137220  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.140315  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.140867  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.140890  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.141049  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:48.141239  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.141413  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.141549  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:48.141724  290352 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:48.141962  290352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 14:01:48.141978  290352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:01:48.259137  290352 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-222032
	
	I0407 14:01:48.259174  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 14:01:48.259440  290352 buildroot.go:166] provisioning hostname "kubernetes-upgrade-222032"
	I0407 14:01:48.259474  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 14:01:48.259642  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.262456  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.262859  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.262898  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.263169  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:48.263378  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.263570  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.263713  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:48.263878  290352 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:48.264095  290352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 14:01:48.264114  290352 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-222032 && echo "kubernetes-upgrade-222032" | sudo tee /etc/hostname
	I0407 14:01:48.387326  290352 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-222032
	
	I0407 14:01:48.387357  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.390558  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.391077  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.391125  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.391326  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:48.391523  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.391695  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.391855  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:48.392028  290352 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:48.392231  290352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 14:01:48.392248  290352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-222032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-222032/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-222032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:01:48.501484  290352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:01:48.501516  290352 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:01:48.501544  290352 buildroot.go:174] setting up certificates
	I0407 14:01:48.501555  290352 provision.go:84] configureAuth start
	I0407 14:01:48.501568  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetMachineName
	I0407 14:01:48.501940  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 14:01:48.504679  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.505019  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.505050  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.505216  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.507285  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.507626  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.507655  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.507740  290352 provision.go:143] copyHostCerts
	I0407 14:01:48.507818  290352 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:01:48.507842  290352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:01:48.507894  290352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:01:48.507982  290352 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:01:48.507990  290352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:01:48.508012  290352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:01:48.508085  290352 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:01:48.508092  290352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:01:48.508116  290352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:01:48.508167  290352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-222032 san=[127.0.0.1 192.168.50.198 kubernetes-upgrade-222032 localhost minikube]
	I0407 14:01:48.554212  290352 provision.go:177] copyRemoteCerts
	I0407 14:01:48.554274  290352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:01:48.554300  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.556829  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.557121  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.557155  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.557294  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:48.557473  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.557624  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:48.557732  290352 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 14:01:48.643719  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0407 14:01:48.673248  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:01:48.699490  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:01:48.727657  290352 provision.go:87] duration metric: took 226.087558ms to configureAuth
	I0407 14:01:48.727706  290352 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:01:48.727962  290352 config.go:182] Loaded profile config "kubernetes-upgrade-222032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:01:48.728064  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:48.730817  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.731224  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:48.731266  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:48.731499  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:48.731663  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.731820  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:48.731952  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:48.732094  290352 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:48.732335  290352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 14:01:48.732353  290352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:01:49.643378  290352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:01:49.643411  290352 machine.go:96] duration metric: took 1.506462601s to provisionDockerMachine
	I0407 14:01:49.643427  290352 start.go:293] postStartSetup for "kubernetes-upgrade-222032" (driver="kvm2")
	I0407 14:01:49.643441  290352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:01:49.643460  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:49.643851  290352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:01:49.643891  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:49.646784  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:49.647136  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:49.647160  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:49.647376  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:49.647617  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:49.647751  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:49.647935  290352 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 14:01:49.731440  290352 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:01:49.735938  290352 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:01:49.735971  290352 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:01:49.736071  290352 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:01:49.736169  290352 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:01:49.736285  290352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:01:49.745841  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:01:49.776047  290352 start.go:296] duration metric: took 132.602959ms for postStartSetup
	I0407 14:01:49.776099  290352 fix.go:56] duration metric: took 1.662952647s for fixHost
	I0407 14:01:49.776130  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:49.779035  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:49.779433  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:49.779462  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:49.779610  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:49.779806  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:49.779991  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:49.780217  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:49.780460  290352 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:49.780758  290352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0407 14:01:49.780770  290352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:01:50.005404  290352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034509.993806173
	
	I0407 14:01:50.005431  290352 fix.go:216] guest clock: 1744034509.993806173
	I0407 14:01:50.005439  290352 fix.go:229] Guest: 2025-04-07 14:01:49.993806173 +0000 UTC Remote: 2025-04-07 14:01:49.776109741 +0000 UTC m=+1.819470381 (delta=217.696432ms)
	I0407 14:01:50.005459  290352 fix.go:200] guest clock delta is within tolerance: 217.696432ms
	I0407 14:01:50.005465  290352 start.go:83] releasing machines lock for "kubernetes-upgrade-222032", held for 1.892337415s
	I0407 14:01:50.005489  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:50.005877  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 14:01:50.008986  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.009362  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:50.009397  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.009527  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:50.010086  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:50.010311  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .DriverName
	I0407 14:01:50.010390  290352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:01:50.010429  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:50.010505  290352 ssh_runner.go:195] Run: cat /version.json
	I0407 14:01:50.010537  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHHostname
	I0407 14:01:50.013339  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.013662  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.013791  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:50.013816  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.013971  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:50.014276  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:50.014316  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:50.014354  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:50.014550  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:50.014589  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHPort
	I0407 14:01:50.014746  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHKeyPath
	I0407 14:01:50.014788  290352 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 14:01:50.014877  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetSSHUsername
	I0407 14:01:50.015032  290352 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/kubernetes-upgrade-222032/id_rsa Username:docker}
	I0407 14:01:50.175200  290352 ssh_runner.go:195] Run: systemctl --version
	I0407 14:01:50.236814  290352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:01:50.468143  290352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:01:50.475481  290352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:01:50.475550  290352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:01:50.490261  290352 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0407 14:01:50.490290  290352 start.go:495] detecting cgroup driver to use...
	I0407 14:01:50.490353  290352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:01:50.514836  290352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:01:50.530395  290352 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:01:50.530464  290352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:01:50.551998  290352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:01:50.572066  290352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:01:50.776803  290352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:01:50.976095  290352 docker.go:233] disabling docker service ...
	I0407 14:01:50.976172  290352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:01:50.995232  290352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:01:51.013351  290352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:01:51.202011  290352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:01:51.373052  290352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:01:51.387476  290352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:01:51.406212  290352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 14:01:51.406284  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.419512  290352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:01:51.419587  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.431708  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.445384  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.458914  290352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:01:51.472843  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.487183  290352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.500632  290352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:51.511689  290352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:01:51.521764  290352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:01:51.531924  290352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:01:51.744767  290352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 14:01:52.091024  290352 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:01:52.091115  290352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:01:52.096781  290352 start.go:563] Will wait 60s for crictl version
	I0407 14:01:52.096879  290352 ssh_runner.go:195] Run: which crictl
	I0407 14:01:52.101010  290352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:01:52.137471  290352 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:01:52.137543  290352 ssh_runner.go:195] Run: crio --version
	I0407 14:01:52.167843  290352 ssh_runner.go:195] Run: crio --version
	I0407 14:01:52.201007  290352 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 14:01:52.202337  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) Calling .GetIP
	I0407 14:01:52.205728  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:52.206086  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:55:58", ip: ""} in network mk-kubernetes-upgrade-222032: {Iface:virbr2 ExpiryTime:2025-04-07 15:01:19 +0000 UTC Type:0 Mac:52:54:00:17:55:58 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:kubernetes-upgrade-222032 Clientid:01:52:54:00:17:55:58}
	I0407 14:01:52.206115  290352 main.go:141] libmachine: (kubernetes-upgrade-222032) DBG | domain kubernetes-upgrade-222032 has defined IP address 192.168.50.198 and MAC address 52:54:00:17:55:58 in network mk-kubernetes-upgrade-222032
	I0407 14:01:52.206346  290352 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0407 14:01:52.210692  290352 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kube
rnetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:01:52.210791  290352 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:01:52.210854  290352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:01:52.254131  290352 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:01:52.254158  290352 crio.go:433] Images already preloaded, skipping extraction
	I0407 14:01:52.254210  290352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:01:52.288907  290352 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:01:52.288935  290352 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:01:52.288943  290352 kubeadm.go:934] updating node { 192.168.50.198 8443 v1.32.2 crio true true} ...
	I0407 14:01:52.289063  290352 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-222032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:01:52.289148  290352 ssh_runner.go:195] Run: crio config
	I0407 14:01:52.343919  290352 cni.go:84] Creating CNI manager for ""
	I0407 14:01:52.343943  290352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:01:52.343957  290352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:01:52.343983  290352 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.198 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-222032 NodeName:kubernetes-upgrade-222032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:01:52.344136  290352 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-222032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.198"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.198"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:01:52.344212  290352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:01:52.355185  290352 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:01:52.355275  290352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:01:52.366404  290352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0407 14:01:52.384632  290352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:01:52.401229  290352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0407 14:01:52.417804  290352 ssh_runner.go:195] Run: grep 192.168.50.198	control-plane.minikube.internal$ /etc/hosts
	I0407 14:01:52.421718  290352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:01:52.552041  290352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:01:52.568031  290352 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032 for IP: 192.168.50.198
	I0407 14:01:52.568058  290352 certs.go:194] generating shared ca certs ...
	I0407 14:01:52.568077  290352 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:01:52.568257  290352 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:01:52.568309  290352 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:01:52.568323  290352 certs.go:256] generating profile certs ...
	I0407 14:01:52.568482  290352 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/client.key
	I0407 14:01:52.568554  290352 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key.4bde82e8
	I0407 14:01:52.568605  290352 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key
	I0407 14:01:52.568764  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:01:52.568825  290352 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:01:52.568835  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:01:52.568869  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:01:52.568906  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:01:52.568942  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:01:52.569013  290352 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:01:52.569855  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:01:52.595715  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:01:52.621517  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:01:52.647222  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:01:52.672469  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 14:01:52.697973  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0407 14:01:52.724210  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:01:52.751067  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kubernetes-upgrade-222032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:01:52.778983  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:01:52.808182  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:01:52.833706  290352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:01:52.863054  290352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:01:52.982054  290352 ssh_runner.go:195] Run: openssl version
	I0407 14:01:51.657016  289526 pod_ready.go:103] pod "coredns-668d6bf9bc-gpf6x" in "kube-system" namespace has status "Ready":"False"
	I0407 14:01:53.657454  289526 pod_ready.go:103] pod "coredns-668d6bf9bc-gpf6x" in "kube-system" namespace has status "Ready":"False"
	I0407 14:01:54.217360  290166 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9 6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240 a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0 c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b 176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750 544b31bc057687685fecaddbd5638547f3a537376146f8f143230e70285b50ff d7665d063cf00f65f720b1cfc81c56280670bc8f16568e8e8b72398ca6d2bddd fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064 907ea441d33ea9bd60464c5c56a3078dbe7d1aef6b5744ed0f7d31481613b0ac: (10.635833991s)
	W0407 14:01:54.217462  290166 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9 6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240 a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0 c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b 176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750 544b31bc057687685fecaddbd5638547f3a537376146f8f143230e70285b50ff d7665d063cf00f65f720b1cfc81c56280670bc8f16568e8e8b72398ca6d2bddd fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064 907ea441d33ea9bd60464c5c56a3078dbe7d1aef6b5744ed0f7d31481613b0ac: Process exited with status 1
	stdout:
	0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9
	6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240
	a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0
	c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a
	f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b
	176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750
	544b31bc057687685fecaddbd5638547f3a537376146f8f143230e70285b50ff
	d7665d063cf00f65f720b1cfc81c56280670bc8f16568e8e8b72398ca6d2bddd
	
	stderr:
	E0407 14:01:54.211999    3098 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064\": container with ID starting with fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064 not found: ID does not exist" containerID="fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064"
	time="2025-04-07T14:01:54Z" level=fatal msg="stopping the container \"fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064\": rpc error: code = NotFound desc = could not find container \"fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064\": container with ID starting with fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064 not found: ID does not exist"
	I0407 14:01:54.217545  290166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:01:54.270264  290166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:01:54.282838  290166 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Apr  7 14:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Apr  7 14:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Apr  7 14:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Apr  7 14:00 /etc/kubernetes/scheduler.conf
	
	I0407 14:01:54.282928  290166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:01:54.294757  290166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:01:54.307034  290166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:01:54.318492  290166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:01:54.318584  290166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:01:54.334205  290166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:01:54.345180  290166 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:01:54.345255  290166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:01:54.356135  290166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:01:54.367349  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:01:54.428238  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:01:55.561328  290166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.133041185s)
	I0407 14:01:55.561372  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:01:55.809408  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:01:55.887942  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:01:55.973561  290166 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:01:55.973659  290166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:01:56.474134  290166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:01:56.974562  290166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:01:56.991054  290166 api_server.go:72] duration metric: took 1.017492372s to wait for apiserver process to appear ...
	I0407 14:01:56.991087  290166 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:01:56.991110  290166 api_server.go:253] Checking apiserver healthz at https://192.168.61.76:8443/healthz ...
	I0407 14:01:53.009446  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:01:53.087358  290352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:01:53.105352  290352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:01:53.105448  290352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:01:53.116538  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:01:53.144058  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:01:53.184281  290352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:53.201853  290352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:53.201921  290352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:53.209498  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:01:53.220862  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:01:53.232794  290352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:01:53.237949  290352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:01:53.238016  290352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:01:53.244095  290352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:01:53.257486  290352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:01:53.263877  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:01:53.271035  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:01:53.277066  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:01:53.283103  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:01:53.289101  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:01:53.296802  290352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 14:01:53.302814  290352 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-222032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kuberne
tes-upgrade-222032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:53.302919  290352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:01:53.302962  290352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:01:53.356370  290352 cri.go:89] found id: "28cc132fb1181ef29231ed3ea79e1720673e69abd88848197bbecca8cc5e3623"
	I0407 14:01:53.356403  290352 cri.go:89] found id: "b0371c7e1b27d188adf577aab71321fb124c55bb95418e50a5c20300608fee02"
	I0407 14:01:53.356409  290352 cri.go:89] found id: "5f51fd61e1e2150f4d9c8f430315de1534a32e2b9a1d10f0d409dafaa2d1ca91"
	I0407 14:01:53.356414  290352 cri.go:89] found id: "e0a819d747be65bc08ffd398c1b9d6269931cdc7d4b3c4c01b44f801e27508cd"
	I0407 14:01:53.356437  290352 cri.go:89] found id: "2bc7617a545553e26ad4311278881d4a4ee58a22bdbeb48180325884edc32e2d"
	I0407 14:01:53.356442  290352 cri.go:89] found id: ""
	I0407 14:01:53.356506  290352 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-222032 -n kubernetes-upgrade-222032
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-222032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-222032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-222032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-222032: (1.202022912s)
--- FAIL: TestKubernetesUpgrade (392.74s)
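
The provisioning sequence logged above boils down to three things on the node: rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup), restarting CRI-O and polling /var/run/crio/crio.sock, and confirming the control-plane certificates are still valid with openssl's -checkend test. A minimal hand-run sketch of the same checks is shown below; it is illustrative only, not part of the test, the profile name and file paths are copied from the log lines above, and it assumes a live profile (the one in this log is deleted by the cleanup step).

	# Inspect the CRI-O settings minikube rewrote (pause image, cgroup driver, conmon cgroup)
	minikube -p kubernetes-upgrade-222032 ssh \
	  "sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf"

	# Confirm the runtime answers on the socket the 60s waits above poll
	minikube -p kubernetes-upgrade-222032 ssh \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version"

	# Repeat the certificate freshness check; exit status 0 means valid for at least 24h (86400s)
	minikube -p kubernetes-upgrade-222032 ssh \
	  "sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt"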

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-440331 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-440331 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.219557125s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-440331] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-440331" primary control-plane node in "pause-440331" cluster
	* Updating the running kvm2 "pause-440331" VM ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-440331" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:01:32.568842  290166 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:01:32.569124  290166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:01:32.569138  290166 out.go:358] Setting ErrFile to fd 2...
	I0407 14:01:32.569145  290166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:01:32.569376  290166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:01:32.570182  290166 out.go:352] Setting JSON to false
	I0407 14:01:32.571555  290166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20640,"bootTime":1744013853,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:01:32.571648  290166 start.go:139] virtualization: kvm guest
	I0407 14:01:32.573532  290166 out.go:177] * [pause-440331] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:01:32.574852  290166 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:01:32.574868  290166 notify.go:220] Checking for updates...
	I0407 14:01:32.577786  290166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:01:32.579091  290166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:01:32.580330  290166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:01:32.581628  290166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:01:32.582881  290166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:01:32.584619  290166 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:01:32.585275  290166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:32.585371  290166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:32.602588  290166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0407 14:01:32.603145  290166 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:32.603834  290166 main.go:141] libmachine: Using API Version  1
	I0407 14:01:32.603853  290166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:32.604274  290166 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:32.604690  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:32.605038  290166 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:01:32.605488  290166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:32.605532  290166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:32.622388  290166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0407 14:01:32.623273  290166 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:32.623941  290166 main.go:141] libmachine: Using API Version  1
	I0407 14:01:32.623969  290166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:32.624368  290166 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:32.624623  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:32.667783  290166 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:01:32.669158  290166 start.go:297] selected driver: kvm2
	I0407 14:01:32.669183  290166 start.go:901] validating driver "kvm2" against &{Name:pause-440331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pa
use-440331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:
false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:32.669401  290166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:01:32.669785  290166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:01:32.669868  290166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:01:32.693301  290166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:01:32.694380  290166 cni.go:84] Creating CNI manager for ""
	I0407 14:01:32.694453  290166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:01:32.694521  290166 start.go:340] cluster config:
	{Name:pause-440331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-440331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliase
s:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:32.694714  290166 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:01:32.696694  290166 out.go:177] * Starting "pause-440331" primary control-plane node in "pause-440331" cluster
	I0407 14:01:32.697957  290166 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:01:32.698004  290166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:01:32.698019  290166 cache.go:56] Caching tarball of preloaded images
	I0407 14:01:32.698104  290166 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:01:32.698118  290166 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
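The preload step above reuses a cached image tarball instead of downloading. A hedged sketch for inspecting that cache on the Jenkins host (paths are taken from this log; availability of the lz4 CLI is an assumption):
    ls -lh /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/
    # peek at the archive contents without extracting (assumes lz4 is installed)
    lz4 -dc /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head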
	I0407 14:01:32.698240  290166 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/config.json ...
	I0407 14:01:32.698427  290166 start.go:360] acquireMachinesLock for pause-440331: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:01:32.698467  290166 start.go:364] duration metric: took 25.117µs to acquireMachinesLock for "pause-440331"
	I0407 14:01:32.698479  290166 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:01:32.698484  290166 fix.go:54] fixHost starting: 
	I0407 14:01:32.698814  290166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:01:32.698856  290166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:01:32.717611  290166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0407 14:01:32.718244  290166 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:01:32.718758  290166 main.go:141] libmachine: Using API Version  1
	I0407 14:01:32.718780  290166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:01:32.719205  290166 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:01:32.719414  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:32.719585  290166 main.go:141] libmachine: (pause-440331) Calling .GetState
	I0407 14:01:32.721308  290166 fix.go:112] recreateIfNeeded on pause-440331: state=Running err=<nil>
	W0407 14:01:32.721347  290166 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:01:32.724187  290166 out.go:177] * Updating the running kvm2 "pause-440331" VM ...
	I0407 14:01:32.725378  290166 machine.go:93] provisionDockerMachine start ...
	I0407 14:01:32.725398  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:32.725649  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:32.728148  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.728684  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:32.728716  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.728866  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:32.729049  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.729217  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.729335  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:32.729479  290166 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:32.729738  290166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0407 14:01:32.729749  290166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:01:32.841487  290166 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-440331
	
	I0407 14:01:32.841526  290166 main.go:141] libmachine: (pause-440331) Calling .GetMachineName
	I0407 14:01:32.841803  290166 buildroot.go:166] provisioning hostname "pause-440331"
	I0407 14:01:32.841834  290166 main.go:141] libmachine: (pause-440331) Calling .GetMachineName
	I0407 14:01:32.842065  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:32.845081  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.845564  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:32.845594  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.845775  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:32.845972  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.846121  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.846229  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:32.846384  290166 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:32.846639  290166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0407 14:01:32.846655  290166 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-440331 && echo "pause-440331" | sudo tee /etc/hostname
	I0407 14:01:32.981270  290166 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-440331
	
	I0407 14:01:32.981326  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:32.984814  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.985274  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:32.985311  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:32.985488  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:32.985672  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.985844  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:32.986025  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:32.986206  290166 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:32.986437  290166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0407 14:01:32.986470  290166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-440331' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-440331/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-440331' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:01:33.111485  290166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
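The script above keeps a 127.0.1.1 alias for the node name in the guest's /etc/hosts so the hostname set earlier resolves locally. A minimal sketch for confirming it from the host (profile name taken from this log):
    minikube -p pause-440331 ssh -- grep -n 127.0.1.1 /etc/hosts
    # expected to contain: 127.0.1.1 pause-440331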
	I0407 14:01:33.111526  290166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:01:33.111551  290166 buildroot.go:174] setting up certificates
	I0407 14:01:33.111564  290166 provision.go:84] configureAuth start
	I0407 14:01:33.111579  290166 main.go:141] libmachine: (pause-440331) Calling .GetMachineName
	I0407 14:01:33.111973  290166 main.go:141] libmachine: (pause-440331) Calling .GetIP
	I0407 14:01:33.114937  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.115307  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:33.115338  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.115588  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:33.118214  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.118495  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:33.118527  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.118636  290166 provision.go:143] copyHostCerts
	I0407 14:01:33.118696  290166 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:01:33.118725  290166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:01:33.118843  290166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:01:33.118981  290166 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:01:33.118990  290166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:01:33.119017  290166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:01:33.119081  290166 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:01:33.119092  290166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:01:33.119119  290166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:01:33.119170  290166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.pause-440331 san=[127.0.0.1 192.168.61.76 localhost minikube pause-440331]
	I0407 14:01:33.672799  290166 provision.go:177] copyRemoteCerts
	I0407 14:01:33.672872  290166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:01:33.672903  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:33.676178  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.676581  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:33.676612  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.676836  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:33.677061  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:33.677272  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:33.677466  290166 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/pause-440331/id_rsa Username:docker}
	I0407 14:01:33.763970  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:01:33.798408  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0407 14:01:33.846036  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:01:33.882898  290166 provision.go:87] duration metric: took 771.314012ms to configureAuth
	I0407 14:01:33.882941  290166 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:01:33.883346  290166 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:01:33.883500  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:33.887481  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.887946  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:33.887974  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:33.888268  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:33.888492  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:33.888707  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:33.888931  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:33.889201  290166 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:33.889549  290166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0407 14:01:33.889572  290166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:01:39.473695  290166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
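The step above writes the --insecure-registry flag into /etc/sysconfig/crio.minikube, presumably read by the crio systemd unit via an EnvironmentFile directive, then restarts CRI-O; the several-second gap in the timestamps is that restart. A hedged way to confirm the file landed:
    minikube -p pause-440331 ssh -- cat /etc/sysconfig/crio.minikube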
	
	I0407 14:01:39.473724  290166 machine.go:96] duration metric: took 6.748331225s to provisionDockerMachine
	I0407 14:01:39.473738  290166 start.go:293] postStartSetup for "pause-440331" (driver="kvm2")
	I0407 14:01:39.473752  290166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:01:39.473772  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:39.474161  290166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:01:39.474200  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:39.477424  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.477756  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:39.477789  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.477891  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:39.478119  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:39.478282  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:39.478394  290166 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/pause-440331/id_rsa Username:docker}
	I0407 14:01:39.571354  290166 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:01:39.576789  290166 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:01:39.576844  290166 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:01:39.576903  290166 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:01:39.577003  290166 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:01:39.577122  290166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:01:39.589521  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:01:39.619322  290166 start.go:296] duration metric: took 145.562378ms for postStartSetup
	I0407 14:01:39.619384  290166 fix.go:56] duration metric: took 6.920898692s for fixHost
	I0407 14:01:39.619414  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:39.622616  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.623128  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:39.623181  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.623380  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:39.623574  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:39.623783  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:39.623956  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:39.624128  290166 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:39.624375  290166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.76 22 <nil> <nil>}
	I0407 14:01:39.624386  290166 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:01:39.737531  290166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034499.734047360
	
	I0407 14:01:39.737564  290166 fix.go:216] guest clock: 1744034499.734047360
	I0407 14:01:39.737575  290166 fix.go:229] Guest: 2025-04-07 14:01:39.73404736 +0000 UTC Remote: 2025-04-07 14:01:39.619391575 +0000 UTC m=+7.099513299 (delta=114.655785ms)
	I0407 14:01:39.737604  290166 fix.go:200] guest clock delta is within tolerance: 114.655785ms
	I0407 14:01:39.737612  290166 start.go:83] releasing machines lock for "pause-440331", held for 7.039136717s
	I0407 14:01:39.737639  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:39.737994  290166 main.go:141] libmachine: (pause-440331) Calling .GetIP
	I0407 14:01:39.740829  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.741256  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:39.741310  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.741450  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:39.742009  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:39.742222  290166 main.go:141] libmachine: (pause-440331) Calling .DriverName
	I0407 14:01:39.742322  290166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:01:39.742379  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:39.742473  290166 ssh_runner.go:195] Run: cat /version.json
	I0407 14:01:39.742503  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHHostname
	I0407 14:01:39.745442  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.745631  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.745866  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:39.745919  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.746037  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:39.746083  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:39.746203  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:39.746361  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHPort
	I0407 14:01:39.746387  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:39.746533  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:39.746550  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHKeyPath
	I0407 14:01:39.746695  290166 main.go:141] libmachine: (pause-440331) Calling .GetSSHUsername
	I0407 14:01:39.746705  290166 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/pause-440331/id_rsa Username:docker}
	I0407 14:01:39.746853  290166 sshutil.go:53] new ssh client: &{IP:192.168.61.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/pause-440331/id_rsa Username:docker}
	I0407 14:01:39.849600  290166 ssh_runner.go:195] Run: systemctl --version
	I0407 14:01:39.856740  290166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:01:40.022517  290166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:01:40.031084  290166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:01:40.031182  290166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:01:40.044163  290166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
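The find invocation above is logged with its shell quoting stripped. A quoted equivalent (a sketch, not the literal command minikube ran) that renames any bridge/podman CNI configs out of the way would look like:
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;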
	I0407 14:01:40.044202  290166 start.go:495] detecting cgroup driver to use...
	I0407 14:01:40.044284  290166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:01:40.061169  290166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:01:40.076282  290166 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:01:40.076338  290166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:01:40.092834  290166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:01:40.109248  290166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:01:40.254196  290166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:01:40.389701  290166 docker.go:233] disabling docker service ...
	I0407 14:01:40.389776  290166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:01:40.418383  290166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:01:40.438652  290166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:01:40.600908  290166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:01:40.769453  290166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:01:40.785714  290166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:01:40.809354  290166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 14:01:40.809447  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.823118  290166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:01:40.823181  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.836475  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.852685  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.866172  290166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:01:40.881738  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.895491  290166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.909801  290166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:01:40.923070  290166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:01:40.933991  290166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:01:40.945046  290166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:01:41.095859  290166 ssh_runner.go:195] Run: sudo systemctl restart crio
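The sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager before this restart. A hedged sketch for verifying the resulting drop-in on the node; the expected values are reconstructed from the sed expressions, not captured output:
    minikube -p pause-440331 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # reconstructed expectation:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",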
	I0407 14:01:41.320950  290166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:01:41.321045  290166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:01:41.326478  290166 start.go:563] Will wait 60s for crictl version
	I0407 14:01:41.326546  290166 ssh_runner.go:195] Run: which crictl
	I0407 14:01:41.330659  290166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:01:41.367144  290166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:01:41.367251  290166 ssh_runner.go:195] Run: crio --version
	I0407 14:01:41.396864  290166 ssh_runner.go:195] Run: crio --version
	I0407 14:01:41.431216  290166 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 14:01:41.432413  290166 main.go:141] libmachine: (pause-440331) Calling .GetIP
	I0407 14:01:41.435293  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:41.435624  290166 main.go:141] libmachine: (pause-440331) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:0b:cd", ip: ""} in network mk-pause-440331: {Iface:virbr1 ExpiryTime:2025-04-07 15:00:25 +0000 UTC Type:0 Mac:52:54:00:6a:0b:cd Iaid: IPaddr:192.168.61.76 Prefix:24 Hostname:pause-440331 Clientid:01:52:54:00:6a:0b:cd}
	I0407 14:01:41.435637  290166 main.go:141] libmachine: (pause-440331) DBG | domain pause-440331 has defined IP address 192.168.61.76 and MAC address 52:54:00:6a:0b:cd in network mk-pause-440331
	I0407 14:01:41.435893  290166 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0407 14:01:41.440725  290166 kubeadm.go:883] updating cluster {Name:pause-440331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-440331 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secur
ity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:01:41.440942  290166 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:01:41.441018  290166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:01:41.485238  290166 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:01:41.485268  290166 crio.go:433] Images already preloaded, skipping extraction
	I0407 14:01:41.485336  290166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:01:41.525535  290166 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:01:41.525565  290166 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:01:41.525575  290166 kubeadm.go:934] updating node { 192.168.61.76 8443 v1.32.2 crio true true} ...
	I0407 14:01:41.525684  290166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-440331 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-440331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
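The [Service] override above is the kubelet drop-in that the 311-byte scp later in this log writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for viewing the unit together with its drop-ins on the node:
    minikube -p pause-440331 ssh -- systemctl cat kubelet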
	I0407 14:01:41.525771  290166 ssh_runner.go:195] Run: crio config
	I0407 14:01:41.577974  290166 cni.go:84] Creating CNI manager for ""
	I0407 14:01:41.578009  290166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:01:41.578021  290166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:01:41.578056  290166 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.76 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-440331 NodeName:pause-440331 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:01:41.578242  290166 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-440331"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:01:41.578329  290166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:01:41.593556  290166 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:01:41.593631  290166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:01:41.605275  290166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0407 14:01:41.628167  290166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:01:41.649080  290166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
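The 2289-byte file copied above is the kubeadm config printed earlier in this log. Because the cluster already exists, minikube only stages it here; purely as a hedged illustration (not something this test runs), a config like this would normally be exercised with:
    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run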
	I0407 14:01:41.671162  290166 ssh_runner.go:195] Run: grep 192.168.61.76	control-plane.minikube.internal$ /etc/hosts
	I0407 14:01:41.676209  290166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:01:41.844791  290166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:01:41.861121  290166 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331 for IP: 192.168.61.76
	I0407 14:01:41.861152  290166 certs.go:194] generating shared ca certs ...
	I0407 14:01:41.861168  290166 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:01:41.861356  290166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:01:41.861396  290166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:01:41.861406  290166 certs.go:256] generating profile certs ...
	I0407 14:01:41.861479  290166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/client.key
	I0407 14:01:41.861532  290166 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/apiserver.key.3a55403a
	I0407 14:01:41.861572  290166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/proxy-client.key
	I0407 14:01:41.861675  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:01:41.861702  290166 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:01:41.861711  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:01:41.861736  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:01:41.861760  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:01:41.861781  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:01:41.861820  290166 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:01:41.862377  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:01:41.894863  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:01:41.923603  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:01:41.956828  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:01:41.984515  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 14:01:42.017210  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:01:42.048562  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:01:42.077535  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/pause-440331/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:01:42.109256  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:01:42.178442  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:01:42.254970  290166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:01:42.397286  290166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:01:42.509748  290166 ssh_runner.go:195] Run: openssl version
	I0407 14:01:42.535586  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:01:42.563431  290166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:01:42.600854  290166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:01:42.600939  290166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:01:42.640715  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:01:42.732760  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:01:42.833292  290166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:01:42.875044  290166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:01:42.875141  290166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:01:42.936027  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:01:43.007478  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:01:43.048041  290166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:43.060840  290166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:43.060908  290166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:01:43.078337  290166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
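The three sequences above install each PEM under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates trusted CAs by directory lookup. The generic pattern, as a sketch:
    PEM=/usr/share/ca-certificates/minikubeCA.pem   # example path from this log
    sudo ln -fs "$PEM" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$PEM")".0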
	I0407 14:01:43.110491  290166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:01:43.128584  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:01:43.159334  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:01:43.190397  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:01:43.233608  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:01:43.261080  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:01:43.283328  290166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
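
The `openssl x509 -noout ... -checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means it will), and the earlier `-hash`/`ln -fs` pairs populate OpenSSL's subject-hash symlinks under /etc/ssl/certs. Below is a minimal Go sketch, not minikube's own code, of the same 24-hour expiry test done natively with crypto/x509; the certificate path is one of those probed above and the helper name is illustrative.

	// certexpiry.go - minimal sketch (not minikube's code): report whether a PEM
	// certificate expires within the next 24h, mirroring the
	// `openssl x509 -noout -checkend 86400` probes in the log above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Same question -checkend asks: does NotAfter fall before now+window?
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
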
	I0407 14:01:43.289635  290166 kubeadm.go:392] StartCluster: {Name:pause-440331 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-440331 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security
-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:01:43.289780  290166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:01:43.289839  290166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:01:43.369826  290166 cri.go:89] found id: "0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9"
	I0407 14:01:43.369890  290166 cri.go:89] found id: "6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240"
	I0407 14:01:43.369896  290166 cri.go:89] found id: "a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0"
	I0407 14:01:43.369901  290166 cri.go:89] found id: "c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a"
	I0407 14:01:43.369905  290166 cri.go:89] found id: "f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b"
	I0407 14:01:43.369910  290166 cri.go:89] found id: "176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750"
	I0407 14:01:43.369913  290166 cri.go:89] found id: "544b31bc057687685fecaddbd5638547f3a537376146f8f143230e70285b50ff"
	I0407 14:01:43.369918  290166 cri.go:89] found id: "d7665d063cf00f65f720b1cfc81c56280670bc8f16568e8e8b72398ca6d2bddd"
	I0407 14:01:43.369923  290166 cri.go:89] found id: "fe11750f0d883460637b0590845196f37b9472187050b5cf7e9d9e8c62902064"
	I0407 14:01:43.369933  290166 cri.go:89] found id: "907ea441d33ea9bd60464c5c56a3078dbe7d1aef6b5744ed0f7d31481613b0ac"
	I0407 14:01:43.369937  290166 cri.go:89] found id: ""
	I0407 14:01:43.369995  290166 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
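
Just before this stderr block ends, minikube enumerates the existing kube-system containers through the CRI (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) as part of preparing the control-plane restart. A rough sketch of that enumeration is below; it shells out to crictl exactly as the logged command does, and assumes crictl is installed and the caller may read the CRI socket.

	// listcri.go - rough sketch of the container enumeration seen in the log:
	// collect all kube-system container IDs via crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}
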
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-440331 -n pause-440331
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-440331 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-440331 logs -n 25: (1.525851994s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-360931             | stopped-upgrade-360931    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-expiration-837665             | cert-expiration-837665    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-017658             | running-upgrade-017658    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-939490 ssh cat     | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p pause-440331 --memory=2048         | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-574980 ssh               | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-574980 -- sudo        | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p auto-471753 --memory=3072          | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-440331                       | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:02 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-471753 pgrep -a               | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:02 UTC | 07 Apr 25 14:02 UTC |
	| start   | -p flannel-471753                     | flannel-471753            | jenkins | v1.35.0 | 07 Apr 25 14:02 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
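
Each row in the Audit table is one minikube invocation; the final row is the `start` that produced the flannel-471753 log below. For reference, a sketch of issuing that same command from Go with os/exec, using the out/minikube-linux-amd64 binary referenced throughout this report (the wrapper program itself is illustrative, only the flags come from the table):

	// startflannel.go - sketch: run the `minikube start` shown in the Audit
	// table for the flannel-471753 profile, streaming its output.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "flannel-471753",
			"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			"--cni=flannel", "--driver=kvm2", "--container-runtime=crio")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
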
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:02:05
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:02:05.966680  290738 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:02:05.967250  290738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:02:05.967304  290738 out.go:358] Setting ErrFile to fd 2...
	I0407 14:02:05.967322  290738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:02:05.967763  290738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:02:05.968729  290738 out.go:352] Setting JSON to false
	I0407 14:02:05.969621  290738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20673,"bootTime":1744013853,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:02:05.969729  290738 start.go:139] virtualization: kvm guest
	I0407 14:02:05.971498  290738 out.go:177] * [flannel-471753] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:02:05.973187  290738 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:02:05.973183  290738 notify.go:220] Checking for updates...
	I0407 14:02:05.975495  290738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:02:05.976721  290738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:02:05.977990  290738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:05.979157  290738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:02:05.980335  290738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:02:05.981831  290738 config.go:182] Loaded profile config "auto-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.981914  290738 config.go:182] Loaded profile config "cert-expiration-837665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.982019  290738 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.982142  290738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:02:06.021005  290738 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 14:02:06.022287  290738 start.go:297] selected driver: kvm2
	I0407 14:02:06.022310  290738 start.go:901] validating driver "kvm2" against <nil>
	I0407 14:02:06.022328  290738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:02:06.023363  290738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:02:06.023483  290738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:02:06.040778  290738 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:02:06.040825  290738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 14:02:06.041066  290738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:02:06.041105  290738 cni.go:84] Creating CNI manager for "flannel"
	I0407 14:02:06.041111  290738 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0407 14:02:06.041169  290738 start.go:340] cluster config:
	{Name:flannel-471753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-471753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:02:06.041276  290738 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:02:06.043262  290738 out.go:177] * Starting "flannel-471753" primary control-plane node in "flannel-471753" cluster
	I0407 14:02:06.044600  290738 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:02:06.044661  290738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:02:06.044673  290738 cache.go:56] Caching tarball of preloaded images
	I0407 14:02:06.044767  290738 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:02:06.044783  290738 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:02:06.044884  290738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/config.json ...
	I0407 14:02:06.044906  290738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/config.json: {Name:mk9677baab9e7158ddf62f9b110ec8fdecc281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:06.045106  290738 start.go:360] acquireMachinesLock for flannel-471753: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:02:06.045151  290738 start.go:364] duration metric: took 26.466µs to acquireMachinesLock for "flannel-471753"
	I0407 14:02:06.045181  290738 start.go:93] Provisioning new machine with config: &{Name:flannel-471753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterN
ame:flannel-471753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:02:06.045240  290738 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 14:02:03.850459  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:06.348195  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:06.047432  290738 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0407 14:02:06.047644  290738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:02:06.047704  290738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:02:06.063484  290738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0407 14:02:06.064061  290738 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:02:06.064676  290738 main.go:141] libmachine: Using API Version  1
	I0407 14:02:06.064706  290738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:02:06.065050  290738 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:02:06.065220  290738 main.go:141] libmachine: (flannel-471753) Calling .GetMachineName
	I0407 14:02:06.065385  290738 main.go:141] libmachine: (flannel-471753) Calling .DriverName
	I0407 14:02:06.065577  290738 start.go:159] libmachine.API.Create for "flannel-471753" (driver="kvm2")
	I0407 14:02:06.065611  290738 client.go:168] LocalClient.Create starting
	I0407 14:02:06.065647  290738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem
	I0407 14:02:06.065685  290738 main.go:141] libmachine: Decoding PEM data...
	I0407 14:02:06.065706  290738 main.go:141] libmachine: Parsing certificate...
	I0407 14:02:06.065794  290738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem
	I0407 14:02:06.065826  290738 main.go:141] libmachine: Decoding PEM data...
	I0407 14:02:06.065844  290738 main.go:141] libmachine: Parsing certificate...
	I0407 14:02:06.065870  290738 main.go:141] libmachine: Running pre-create checks...
	I0407 14:02:06.065886  290738 main.go:141] libmachine: (flannel-471753) Calling .PreCreateCheck
	I0407 14:02:06.066228  290738 main.go:141] libmachine: (flannel-471753) Calling .GetConfigRaw
	I0407 14:02:06.066610  290738 main.go:141] libmachine: Creating machine...
	I0407 14:02:06.066625  290738 main.go:141] libmachine: (flannel-471753) Calling .Create
	I0407 14:02:06.066791  290738 main.go:141] libmachine: (flannel-471753) creating KVM machine...
	I0407 14:02:06.066806  290738 main.go:141] libmachine: (flannel-471753) creating network...
	I0407 14:02:06.067947  290738 main.go:141] libmachine: (flannel-471753) DBG | found existing default KVM network
	I0407 14:02:06.068866  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.068717  290761 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:e1:02} reservation:<nil>}
	I0407 14:02:06.069801  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.069710  290761 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209a50}
	I0407 14:02:06.069829  290738 main.go:141] libmachine: (flannel-471753) DBG | created network xml: 
	I0407 14:02:06.069845  290738 main.go:141] libmachine: (flannel-471753) DBG | <network>
	I0407 14:02:06.069861  290738 main.go:141] libmachine: (flannel-471753) DBG |   <name>mk-flannel-471753</name>
	I0407 14:02:06.069881  290738 main.go:141] libmachine: (flannel-471753) DBG |   <dns enable='no'/>
	I0407 14:02:06.069895  290738 main.go:141] libmachine: (flannel-471753) DBG |   
	I0407 14:02:06.069904  290738 main.go:141] libmachine: (flannel-471753) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 14:02:06.069921  290738 main.go:141] libmachine: (flannel-471753) DBG |     <dhcp>
	I0407 14:02:06.069928  290738 main.go:141] libmachine: (flannel-471753) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 14:02:06.069934  290738 main.go:141] libmachine: (flannel-471753) DBG |     </dhcp>
	I0407 14:02:06.069943  290738 main.go:141] libmachine: (flannel-471753) DBG |   </ip>
	I0407 14:02:06.069947  290738 main.go:141] libmachine: (flannel-471753) DBG |   
	I0407 14:02:06.069952  290738 main.go:141] libmachine: (flannel-471753) DBG | </network>
	I0407 14:02:06.069955  290738 main.go:141] libmachine: (flannel-471753) DBG | 
	I0407 14:02:06.075366  290738 main.go:141] libmachine: (flannel-471753) DBG | trying to create private KVM network mk-flannel-471753 192.168.50.0/24...
	I0407 14:02:06.151834  290738 main.go:141] libmachine: (flannel-471753) DBG | private KVM network mk-flannel-471753 192.168.50.0/24 created
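
Above, the kvm2 driver skips 192.168.39.0/24 because it is already in use, picks the free 192.168.50.0/24, renders a libvirt <network> document with a DHCP range, and defines it as mk-flannel-471753. A minimal sketch that renders the same XML with text/template follows; the values come from the log, but the program only prints the document rather than calling libvirt (a real driver would hand it to virNetworkDefineXML).

	// netxml.go - minimal sketch: render a libvirt <network> definition like the
	// one logged above. Values are taken from the log; the helper is illustrative
	// and does not talk to libvirt.
	package main

	import (
		"os"
		"text/template"
	)

	const networkTmpl = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	type netParams struct {
		Name, Gateway, Netmask, ClientMin, ClientMax string
	}

	func main() {
		p := netParams{
			Name:      "mk-flannel-471753",
			Gateway:   "192.168.50.1",
			Netmask:   "255.255.255.0",
			ClientMin: "192.168.50.2",
			ClientMax: "192.168.50.253",
		}
		tmpl := template.Must(template.New("net").Parse(networkTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
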
	I0407 14:02:06.151868  290738 main.go:141] libmachine: (flannel-471753) setting up store path in /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 ...
	I0407 14:02:06.151889  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.151809  290761 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:06.151908  290738 main.go:141] libmachine: (flannel-471753) building disk image from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 14:02:06.152007  290738 main.go:141] libmachine: (flannel-471753) Downloading /home/jenkins/minikube-integration/20598-242355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 14:02:06.433560  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.433424  290761 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/id_rsa...
	I0407 14:02:06.834559  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.834408  290761 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/flannel-471753.rawdisk...
	I0407 14:02:06.834592  290738 main.go:141] libmachine: (flannel-471753) DBG | Writing magic tar header
	I0407 14:02:06.834611  290738 main.go:141] libmachine: (flannel-471753) DBG | Writing SSH key tar header
	I0407 14:02:06.834624  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.834535  290761 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 ...
	I0407 14:02:06.834637  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753
	I0407 14:02:06.834648  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines
	I0407 14:02:06.834660  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 (perms=drwx------)
	I0407 14:02:06.834669  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:06.834682  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines (perms=drwxr-xr-x)
	I0407 14:02:06.834701  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube (perms=drwxr-xr-x)
	I0407 14:02:06.834714  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355 (perms=drwxrwxr-x)
	I0407 14:02:06.834724  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355
	I0407 14:02:06.834738  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 14:02:06.834751  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins
	I0407 14:02:06.834760  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 14:02:06.834774  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 14:02:06.834785  290738 main.go:141] libmachine: (flannel-471753) creating domain...
	I0407 14:02:06.834822  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home
	I0407 14:02:06.834844  290738 main.go:141] libmachine: (flannel-471753) DBG | skipping /home - not owner
	I0407 14:02:06.835983  290738 main.go:141] libmachine: (flannel-471753) define libvirt domain using xml: 
	I0407 14:02:06.836012  290738 main.go:141] libmachine: (flannel-471753) <domain type='kvm'>
	I0407 14:02:06.836022  290738 main.go:141] libmachine: (flannel-471753)   <name>flannel-471753</name>
	I0407 14:02:06.836034  290738 main.go:141] libmachine: (flannel-471753)   <memory unit='MiB'>3072</memory>
	I0407 14:02:06.836043  290738 main.go:141] libmachine: (flannel-471753)   <vcpu>2</vcpu>
	I0407 14:02:06.836049  290738 main.go:141] libmachine: (flannel-471753)   <features>
	I0407 14:02:06.836056  290738 main.go:141] libmachine: (flannel-471753)     <acpi/>
	I0407 14:02:06.836064  290738 main.go:141] libmachine: (flannel-471753)     <apic/>
	I0407 14:02:06.836080  290738 main.go:141] libmachine: (flannel-471753)     <pae/>
	I0407 14:02:06.836086  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836093  290738 main.go:141] libmachine: (flannel-471753)   </features>
	I0407 14:02:06.836100  290738 main.go:141] libmachine: (flannel-471753)   <cpu mode='host-passthrough'>
	I0407 14:02:06.836139  290738 main.go:141] libmachine: (flannel-471753)   
	I0407 14:02:06.836168  290738 main.go:141] libmachine: (flannel-471753)   </cpu>
	I0407 14:02:06.836181  290738 main.go:141] libmachine: (flannel-471753)   <os>
	I0407 14:02:06.836189  290738 main.go:141] libmachine: (flannel-471753)     <type>hvm</type>
	I0407 14:02:06.836199  290738 main.go:141] libmachine: (flannel-471753)     <boot dev='cdrom'/>
	I0407 14:02:06.836214  290738 main.go:141] libmachine: (flannel-471753)     <boot dev='hd'/>
	I0407 14:02:06.836226  290738 main.go:141] libmachine: (flannel-471753)     <bootmenu enable='no'/>
	I0407 14:02:06.836235  290738 main.go:141] libmachine: (flannel-471753)   </os>
	I0407 14:02:06.836245  290738 main.go:141] libmachine: (flannel-471753)   <devices>
	I0407 14:02:06.836256  290738 main.go:141] libmachine: (flannel-471753)     <disk type='file' device='cdrom'>
	I0407 14:02:06.836271  290738 main.go:141] libmachine: (flannel-471753)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/boot2docker.iso'/>
	I0407 14:02:06.836286  290738 main.go:141] libmachine: (flannel-471753)       <target dev='hdc' bus='scsi'/>
	I0407 14:02:06.836297  290738 main.go:141] libmachine: (flannel-471753)       <readonly/>
	I0407 14:02:06.836319  290738 main.go:141] libmachine: (flannel-471753)     </disk>
	I0407 14:02:06.836334  290738 main.go:141] libmachine: (flannel-471753)     <disk type='file' device='disk'>
	I0407 14:02:06.836351  290738 main.go:141] libmachine: (flannel-471753)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 14:02:06.836367  290738 main.go:141] libmachine: (flannel-471753)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/flannel-471753.rawdisk'/>
	I0407 14:02:06.836379  290738 main.go:141] libmachine: (flannel-471753)       <target dev='hda' bus='virtio'/>
	I0407 14:02:06.836434  290738 main.go:141] libmachine: (flannel-471753)     </disk>
	I0407 14:02:06.836460  290738 main.go:141] libmachine: (flannel-471753)     <interface type='network'>
	I0407 14:02:06.836488  290738 main.go:141] libmachine: (flannel-471753)       <source network='mk-flannel-471753'/>
	I0407 14:02:06.836514  290738 main.go:141] libmachine: (flannel-471753)       <model type='virtio'/>
	I0407 14:02:06.836525  290738 main.go:141] libmachine: (flannel-471753)     </interface>
	I0407 14:02:06.836532  290738 main.go:141] libmachine: (flannel-471753)     <interface type='network'>
	I0407 14:02:06.836543  290738 main.go:141] libmachine: (flannel-471753)       <source network='default'/>
	I0407 14:02:06.836547  290738 main.go:141] libmachine: (flannel-471753)       <model type='virtio'/>
	I0407 14:02:06.836557  290738 main.go:141] libmachine: (flannel-471753)     </interface>
	I0407 14:02:06.836578  290738 main.go:141] libmachine: (flannel-471753)     <serial type='pty'>
	I0407 14:02:06.836597  290738 main.go:141] libmachine: (flannel-471753)       <target port='0'/>
	I0407 14:02:06.836614  290738 main.go:141] libmachine: (flannel-471753)     </serial>
	I0407 14:02:06.836625  290738 main.go:141] libmachine: (flannel-471753)     <console type='pty'>
	I0407 14:02:06.836635  290738 main.go:141] libmachine: (flannel-471753)       <target type='serial' port='0'/>
	I0407 14:02:06.836643  290738 main.go:141] libmachine: (flannel-471753)     </console>
	I0407 14:02:06.836653  290738 main.go:141] libmachine: (flannel-471753)     <rng model='virtio'>
	I0407 14:02:06.836660  290738 main.go:141] libmachine: (flannel-471753)       <backend model='random'>/dev/random</backend>
	I0407 14:02:06.836667  290738 main.go:141] libmachine: (flannel-471753)     </rng>
	I0407 14:02:06.836675  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836688  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836710  290738 main.go:141] libmachine: (flannel-471753)   </devices>
	I0407 14:02:06.836735  290738 main.go:141] libmachine: (flannel-471753) </domain>
	I0407 14:02:06.836750  290738 main.go:141] libmachine: (flannel-471753) 
	I0407 14:02:06.841125  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:a0:0b:86 in network default
	I0407 14:02:06.842114  290738 main.go:141] libmachine: (flannel-471753) starting domain...
	I0407 14:02:06.842138  290738 main.go:141] libmachine: (flannel-471753) ensuring networks are active...
	I0407 14:02:06.842148  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:06.843181  290738 main.go:141] libmachine: (flannel-471753) Ensuring network default is active
	I0407 14:02:06.843500  290738 main.go:141] libmachine: (flannel-471753) Ensuring network mk-flannel-471753 is active
	I0407 14:02:06.844007  290738 main.go:141] libmachine: (flannel-471753) getting domain XML...
	I0407 14:02:06.844821  290738 main.go:141] libmachine: (flannel-471753) creating domain...
	I0407 14:02:08.117673  290738 main.go:141] libmachine: (flannel-471753) waiting for IP...
	I0407 14:02:08.118600  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.119043  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.119094  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.119043  290761 retry.go:31] will retry after 267.143316ms: waiting for domain to come up
	I0407 14:02:08.387368  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.387985  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.388010  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.387951  290761 retry.go:31] will retry after 288.53872ms: waiting for domain to come up
	I0407 14:02:08.678544  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.679084  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.679111  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.679036  290761 retry.go:31] will retry after 487.196115ms: waiting for domain to come up
	I0407 14:02:09.167804  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:09.168374  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:09.168396  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:09.168354  290761 retry.go:31] will retry after 383.713176ms: waiting for domain to come up
	I0407 14:02:09.553845  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:09.554421  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:09.554485  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:09.554383  290761 retry.go:31] will retry after 507.623444ms: waiting for domain to come up
	I0407 14:02:10.064065  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:10.064628  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:10.064673  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:10.064590  290761 retry.go:31] will retry after 755.704153ms: waiting for domain to come up
	I0407 14:02:10.821542  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:10.822041  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:10.822099  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:10.822011  290761 retry.go:31] will retry after 932.523671ms: waiting for domain to come up
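
The repeated "unable to find current IP address ... will retry after ..." lines are the driver polling for the new domain's DHCP lease with a growing, jittered delay. Below is a generic sketch of that wait loop under the assumption that the lease lookup is a caller-supplied function; waitForIP and lookupIP are hypothetical names, not minikube's.

	// waitip.go - generic sketch of the "waiting for domain to come up" loop:
	// retry a lookup with a capped, jittered, growing delay until it yields an
	// IP or the deadline passes. lookupIP stands in for querying DHCP leases.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			// Jitter and grow the delay, roughly matching the 267ms..2.1s
			// progression seen in the log, capped at a few seconds.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			if delay < 3*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}

	func main() {
		start := time.Now()
		// Toy lookup that "finds" an IP after about two seconds.
		ip, err := waitForIP(func() (string, bool) {
			if time.Since(start) > 2*time.Second {
				return "192.168.50.10", true
			}
			return "", false
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
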
	I0407 14:02:08.350888  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:09.850104  290166 pod_ready.go:93] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:09.850128  290166 pod_ready.go:82] duration metric: took 8.007036235s for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:09.850137  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:11.856832  290166 pod_ready.go:93] pod "kube-apiserver-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:11.856860  290166 pod_ready.go:82] duration metric: took 2.006716587s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:11.856872  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.862872  290166 pod_ready.go:93] pod "kube-controller-manager-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.862905  290166 pod_ready.go:82] duration metric: took 2.006022919s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.862920  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.869050  290166 pod_ready.go:93] pod "kube-proxy-42rwd" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.869080  290166 pod_ready.go:82] duration metric: took 6.151389ms for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.869095  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.873451  290166 pod_ready.go:93] pod "kube-scheduler-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.873468  290166 pod_ready.go:82] duration metric: took 4.36558ms for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.873474  290166 pod_ready.go:39] duration metric: took 12.039057001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:13.873493  290166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:02:13.893988  290166 ops.go:34] apiserver oom_adj: -16
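
The ops.go line above comes from reading /proc/<apiserver pid>/oom_adj after the restart; the negative value (-16) makes the apiserver an unlikely OOM-kill target. A tiny sketch of the same read is below; pgrep and the proc path mirror the logged shell command.

	// oomadj.go - tiny sketch of the oom_adj check from the log: find the
	// kube-apiserver pid and read its /proc/<pid>/oom_adj value.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		pid := strings.Fields(string(out))[0] // first matching pid
		val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val)))
	}
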
	I0407 14:02:13.894018  290166 kubeadm.go:597] duration metric: took 30.443283794s to restartPrimaryControlPlane
	I0407 14:02:13.894031  290166 kubeadm.go:394] duration metric: took 30.604408284s to StartCluster
	I0407 14:02:13.894055  290166 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:13.894150  290166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:02:13.895242  290166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:13.895546  290166 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:02:13.895847  290166 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:13.895802  290166 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:02:13.897142  290166 out.go:177] * Enabled addons: 
	I0407 14:02:13.897156  290166 out.go:177] * Verifying Kubernetes components...
	I0407 14:02:11.756758  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:11.757284  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:11.757314  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:11.757252  290761 retry.go:31] will retry after 1.037501795s: waiting for domain to come up
	I0407 14:02:12.796052  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:12.796604  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:12.796629  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:12.796576  290761 retry.go:31] will retry after 1.411367229s: waiting for domain to come up
	I0407 14:02:14.209694  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:14.210272  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:14.210300  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:14.210239  290761 retry.go:31] will retry after 2.109587737s: waiting for domain to come up
	I0407 14:02:13.898327  290166 addons.go:514] duration metric: took 2.532305ms for enable addons: enabled=[]
	I0407 14:02:13.898372  290166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:02:14.085076  290166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:02:14.104212  290166 node_ready.go:35] waiting up to 6m0s for node "pause-440331" to be "Ready" ...
	I0407 14:02:14.107358  290166 node_ready.go:49] node "pause-440331" has status "Ready":"True"
	I0407 14:02:14.107380  290166 node_ready.go:38] duration metric: took 3.137183ms for node "pause-440331" to be "Ready" ...
	I0407 14:02:14.107389  290166 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:14.110474  290166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.115245  290166 pod_ready.go:93] pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.115266  290166 pod_ready.go:82] duration metric: took 4.767143ms for pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.115277  290166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.261170  290166 pod_ready.go:93] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.261202  290166 pod_ready.go:82] duration metric: took 145.917612ms for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.261215  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.660215  290166 pod_ready.go:93] pod "kube-apiserver-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.660248  290166 pod_ready.go:82] duration metric: took 399.024123ms for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.660264  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.060099  290166 pod_ready.go:93] pod "kube-controller-manager-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.060132  290166 pod_ready.go:82] duration metric: took 399.858ms for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.060150  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.461977  290166 pod_ready.go:93] pod "kube-proxy-42rwd" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.462008  290166 pod_ready.go:82] duration metric: took 401.851187ms for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.462019  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.861779  290166 pod_ready.go:93] pod "kube-scheduler-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.861808  290166 pod_ready.go:82] duration metric: took 399.781438ms for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.861816  290166 pod_ready.go:39] duration metric: took 1.75441794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:15.861832  290166 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:02:15.861889  290166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:02:15.877325  290166 api_server.go:72] duration metric: took 1.981749278s to wait for apiserver process to appear ...
	I0407 14:02:15.877351  290166 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:02:15.877368  290166 api_server.go:253] Checking apiserver healthz at https://192.168.61.76:8443/healthz ...
	I0407 14:02:15.883110  290166 api_server.go:279] https://192.168.61.76:8443/healthz returned 200:
	ok
	I0407 14:02:15.884313  290166 api_server.go:141] control plane version: v1.32.2
	I0407 14:02:15.884341  290166 api_server.go:131] duration metric: took 6.982088ms to wait for apiserver health ...
	I0407 14:02:15.884352  290166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:02:16.062244  290166 system_pods.go:59] 6 kube-system pods found
	I0407 14:02:16.062281  290166 system_pods.go:61] "coredns-668d6bf9bc-mtscb" [c192111e-3f24-4700-bf04-3a82f48faa32] Running
	I0407 14:02:16.062288  290166 system_pods.go:61] "etcd-pause-440331" [d84f0b9d-be6b-40a8-a298-311e44f20bc5] Running
	I0407 14:02:16.062294  290166 system_pods.go:61] "kube-apiserver-pause-440331" [3dd8017f-c6a3-42f0-a77c-4088c6c70332] Running
	I0407 14:02:16.062299  290166 system_pods.go:61] "kube-controller-manager-pause-440331" [371a0a83-fbb0-4128-b81f-620e2b82df28] Running
	I0407 14:02:16.062305  290166 system_pods.go:61] "kube-proxy-42rwd" [e593e76c-63b2-4de2-9d53-98aae3fa045f] Running
	I0407 14:02:16.062311  290166 system_pods.go:61] "kube-scheduler-pause-440331" [249a64fd-014a-40c3-b85d-6889b1c740ee] Running
	I0407 14:02:16.062319  290166 system_pods.go:74] duration metric: took 177.959241ms to wait for pod list to return data ...
	I0407 14:02:16.062335  290166 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:02:16.260844  290166 default_sa.go:45] found service account: "default"
	I0407 14:02:16.260885  290166 default_sa.go:55] duration metric: took 198.531104ms for default service account to be created ...
	I0407 14:02:16.260898  290166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 14:02:16.462172  290166 system_pods.go:86] 6 kube-system pods found
	I0407 14:02:16.462213  290166 system_pods.go:89] "coredns-668d6bf9bc-mtscb" [c192111e-3f24-4700-bf04-3a82f48faa32] Running
	I0407 14:02:16.462223  290166 system_pods.go:89] "etcd-pause-440331" [d84f0b9d-be6b-40a8-a298-311e44f20bc5] Running
	I0407 14:02:16.462230  290166 system_pods.go:89] "kube-apiserver-pause-440331" [3dd8017f-c6a3-42f0-a77c-4088c6c70332] Running
	I0407 14:02:16.462237  290166 system_pods.go:89] "kube-controller-manager-pause-440331" [371a0a83-fbb0-4128-b81f-620e2b82df28] Running
	I0407 14:02:16.462244  290166 system_pods.go:89] "kube-proxy-42rwd" [e593e76c-63b2-4de2-9d53-98aae3fa045f] Running
	I0407 14:02:16.462249  290166 system_pods.go:89] "kube-scheduler-pause-440331" [249a64fd-014a-40c3-b85d-6889b1c740ee] Running
	I0407 14:02:16.462260  290166 system_pods.go:126] duration metric: took 201.35396ms to wait for k8s-apps to be running ...
	I0407 14:02:16.462274  290166 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:02:16.462334  290166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:02:16.483585  290166 system_svc.go:56] duration metric: took 21.299882ms WaitForService to wait for kubelet
	I0407 14:02:16.483623  290166 kubeadm.go:582] duration metric: took 2.588051364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:02:16.483654  290166 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:02:16.661443  290166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:02:16.661469  290166 node_conditions.go:123] node cpu capacity is 2
	I0407 14:02:16.661481  290166 node_conditions.go:105] duration metric: took 177.821391ms to run NodePressure ...
	I0407 14:02:16.661493  290166 start.go:241] waiting for startup goroutines ...
	I0407 14:02:16.661499  290166 start.go:246] waiting for cluster config update ...
	I0407 14:02:16.661506  290166 start.go:255] writing updated cluster config ...
	I0407 14:02:16.661822  290166 ssh_runner.go:195] Run: rm -f paused
	I0407 14:02:16.716691  290166 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:02:16.718841  290166 out.go:177] * Done! kubectl is now configured to use "pause-440331" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.465413757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034537465368597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2caa6d73-4688-4f88-a7dd-c1635f10f3c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.466218657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec639500-6190-4e0b-8f53-c122e8e33e1e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.466302043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec639500-6190-4e0b-8f53-c122e8e33e1e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.466595800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec639500-6190-4e0b-8f53-c122e8e33e1e name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.512469222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35acb28b-aba7-4966-814f-c4aac3314632 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.512592015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35acb28b-aba7-4966-814f-c4aac3314632 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.513671073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8254dcf7-9a1c-4021-9922-14ded7bc0b05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.514630339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034537514597794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8254dcf7-9a1c-4021-9922-14ded7bc0b05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.515409182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5730f6a5-cb29-4f53-920f-cc4a0341fd71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.515486526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5730f6a5-cb29-4f53-920f-cc4a0341fd71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.515806573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5730f6a5-cb29-4f53-920f-cc4a0341fd71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.565235637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b9f5ab2-4c7a-44c4-a4f5-e1c643ce8224 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.565326797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b9f5ab2-4c7a-44c4-a4f5-e1c643ce8224 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.567214035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=503e1550-17f4-4705-818f-c51c12c75433 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.567824205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034537567784623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=503e1550-17f4-4705-818f-c51c12c75433 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.568617920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8819aed3-426f-40aa-9887-3401b9a31ff8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.568702511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8819aed3-426f-40aa-9887-3401b9a31ff8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.569178123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8819aed3-426f-40aa-9887-3401b9a31ff8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.615503602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd830d38-2ada-47c3-bbe9-3f48445dfd77 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.615574382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd830d38-2ada-47c3-bbe9-3f48445dfd77 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.617142650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1baee22a-5892-46c1-bc57-f6caf488920f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.617651701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034537617629052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1baee22a-5892-46c1-bc57-f6caf488920f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.618436162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e848a9c1-d23f-4228-9552-72dd1343941a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.618489224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e848a9c1-d23f-4228-9552-72dd1343941a name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:17 pause-440331 crio[2406]: time="2025-04-07 14:02:17.619095616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e848a9c1-d23f-4228-9552-72dd1343941a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6ffa8a6583f4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago       Running             etcd                      2                   c98f97fdc30af       etcd-pause-440331
	a7af811745df0       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   21 seconds ago       Running             kube-apiserver            2                   8b2cfd21bab18       kube-apiserver-pause-440331
	c447fdbe487f6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   21 seconds ago       Running             kube-controller-manager   2                   4adaabdab2b47       kube-controller-manager-pause-440331
	c230be78d0259       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   21 seconds ago       Running             kube-scheduler            2                   eeefa0e3af071       kube-scheduler-pause-440331
	dc3f5089a6341       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   34 seconds ago       Running             coredns                   1                   215ae343dceba       coredns-668d6bf9bc-mtscb
	0ed17fdbbe763       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   34 seconds ago       Exited              kube-controller-manager   1                   4adaabdab2b47       kube-controller-manager-pause-440331
	3237ab3c805db       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   34 seconds ago       Running             kube-proxy                1                   e714b418e0969       kube-proxy-42rwd
	6e55d632fbe6f       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   35 seconds ago       Exited              kube-apiserver            1                   8b2cfd21bab18       kube-apiserver-pause-440331
	a50b1c7960acd       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   35 seconds ago       Exited              kube-scheduler            1                   eeefa0e3af071       kube-scheduler-pause-440331
	c5a8fa1f17c47       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   35 seconds ago       Exited              etcd                      1                   c98f97fdc30af       etcd-pause-440331
	f21fae4b13796       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   cdb797faa7c3b       coredns-668d6bf9bc-mtscb
	176ad0ba0fc50       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   About a minute ago   Exited              kube-proxy                0                   8404038eb30c6       kube-proxy-42rwd
	
	
	==> coredns [dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60954 - 2591 "HINFO IN 2330035401213661035.4473101970594804087. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018633422s
	
	
	==> coredns [f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[588057979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.329) (total time: 30002ms):
	Trace[588057979]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:01:28.332)
	Trace[588057979]: [30.002532404s] [30.002532404s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1559066714]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.329) (total time: 30003ms):
	Trace[1559066714]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (14:01:28.332)
	Trace[1559066714]: [30.003321056s] [30.003321056s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1458740820]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.330) (total time: 30001ms):
	Trace[1458740820]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:01:28.332)
	Trace[1458740820]: [30.001995304s] [30.001995304s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-440331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-440331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=pause-440331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_00_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:00:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-440331
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:02:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.76
	  Hostname:    pause-440331
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7e3ac1735344f759c0120ab673dd44f
	  System UUID:                a7e3ac17-3534-4f75-9c01-20ab673dd44f
	  Boot ID:                    a4dcac2e-efd4-4f5b-bbee-1eaa23c70b5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mtscb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-440331                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         86s
	  kube-system                 kube-apiserver-pause-440331             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-440331    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-42rwd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-440331             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 79s                kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  NodeHasSufficientPID     91s (x7 over 91s)  kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                85s                kubelet          Node pause-440331 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           81s                node-controller  Node pause-440331 event: Registered Node pause-440331 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-440331 event: Registered Node pause-440331 in Controller
	
	
	==> dmesg <==
	[  +7.689732] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.063421] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064020] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.174099] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137307] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300362] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.419815] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.057862] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.820606] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.075791] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.020258] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.078750] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.409080] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.138546] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 7 14:01] kauditd_printk_skb: 88 callbacks suppressed
	[ +31.237944] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +0.144875] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.201100] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.167996] systemd-fstab-generator[2371]: Ignoring "noauto" option for root device
	[  +0.327292] systemd-fstab-generator[2399]: Ignoring "noauto" option for root device
	[  +0.731061] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[ +10.361712] kauditd_printk_skb: 196 callbacks suppressed
	[  +3.597345] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	[Apr 7 14:02] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.286959] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	
	
	==> etcd [c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a] <==
	{"level":"info","ts":"2025-04-07T14:01:44.072075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T14:01:44.072117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgPreVoteResp from 9cf9907e2fa71306 at term 2"}
	{"level":"info","ts":"2025-04-07T14:01:44.072130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgVoteResp from 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cf9907e2fa71306 elected leader 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.079239Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9cf9907e2fa71306","local-member-attributes":"{Name:pause-440331 ClientURLs:[https://192.168.61.76:2379]}","request-path":"/0/members/9cf9907e2fa71306/attributes","cluster-id":"24199a2c11709dba","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T14:01:44.079333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:44.081724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:44.084624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:44.087430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T14:01:44.088044Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:44.088456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:44.088507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:44.097732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.76:2379"}
	{"level":"info","ts":"2025-04-07T14:01:54.052862Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-07T14:01:54.052990Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-440331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"]}
	{"level":"warn","ts":"2025-04-07T14:01:54.053153Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.053261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.054822Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.054874Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.76:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-07T14:01:54.054917Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9cf9907e2fa71306","current-leader-member-id":"9cf9907e2fa71306"}
	{"level":"info","ts":"2025-04-07T14:01:54.058256Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:54.058395Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:54.058406Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-440331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"]}
	
	
	==> etcd [e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab] <==
	{"level":"info","ts":"2025-04-07T14:01:56.857445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 switched to configuration voters=(11311230810757468934)"}
	{"level":"info","ts":"2025-04-07T14:01:56.890649Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T14:01:56.896017Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24199a2c11709dba","local-member-id":"9cf9907e2fa71306","added-peer-id":"9cf9907e2fa71306","added-peer-peer-urls":["https://192.168.61.76:2380"]}
	{"level":"info","ts":"2025-04-07T14:01:56.896176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24199a2c11709dba","local-member-id":"9cf9907e2fa71306","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:01:56.896222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:01:56.902311Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9cf9907e2fa71306","initial-advertise-peer-urls":["https://192.168.61.76:2380"],"listen-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T14:01:56.902361Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T14:01:56.902422Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:56.902446Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:58.718798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgPreVoteResp from 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became candidate at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgVoteResp from 9cf9907e2fa71306 at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became leader at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cf9907e2fa71306 elected leader 9cf9907e2fa71306 at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.724183Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9cf9907e2fa71306","local-member-attributes":"{Name:pause-440331 ClientURLs:[https://192.168.61.76:2379]}","request-path":"/0/members/9cf9907e2fa71306/attributes","cluster-id":"24199a2c11709dba","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T14:01:58.724190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:58.724540Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:58.724575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:58.724212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:58.725298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:58.725322Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:58.725917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T14:01:58.726548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.76:2379"}
	
	
	==> kernel <==
	 14:02:18 up 2 min,  0 users,  load average: 1.13, 0.52, 0.19
	Linux pause-440331 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240] <==
	I0407 14:01:45.910198       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0407 14:01:45.910359       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0407 14:01:45.910574       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0407 14:01:45.913837       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0407 14:01:45.910528       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0407 14:01:45.910535       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0407 14:01:45.913817       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0407 14:01:45.914688       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0407 14:01:45.914767       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0407 14:01:46.598283       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:46.615513       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.599111       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:47.614991       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:48.598708       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:48.614594       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.598263       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:49.614328       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:50.598571       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:50.614819       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:51.599141       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:51.614652       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:52.598478       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:52.615639       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:53.598240       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:53.615299       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59] <==
	I0407 14:02:00.093342       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 14:02:00.093395       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 14:02:00.102790       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0407 14:02:00.110702       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 14:02:00.117175       1 aggregator.go:171] initial CRD sync complete...
	I0407 14:02:00.117214       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 14:02:00.117223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 14:02:00.117231       1 cache.go:39] Caches are synced for autoregister controller
	I0407 14:02:00.131425       1 shared_informer.go:320] Caches are synced for configmaps
	I0407 14:02:00.131521       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0407 14:02:00.131835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 14:02:00.132272       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 14:02:00.131850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 14:02:00.150280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 14:02:00.169132       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 14:02:00.939024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 14:02:01.004405       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	W0407 14:02:01.358759       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.76]
	I0407 14:02:01.359730       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 14:02:01.368103       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 14:02:01.692386       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 14:02:01.740861       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 14:02:01.776039       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 14:02:01.788208       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 14:02:07.332690       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9] <==
	
	
	==> kube-controller-manager [c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb] <==
	I0407 14:02:03.313268       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0407 14:02:03.315718       1 shared_informer.go:320] Caches are synced for TTL
	I0407 14:02:03.321292       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 14:02:03.325163       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0407 14:02:03.325853       1 shared_informer.go:320] Caches are synced for daemon sets
	I0407 14:02:03.326755       1 shared_informer.go:320] Caches are synced for PV protection
	I0407 14:02:03.327164       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0407 14:02:03.327293       1 shared_informer.go:320] Caches are synced for persistent volume
	I0407 14:02:03.332661       1 shared_informer.go:320] Caches are synced for endpoint
	I0407 14:02:03.334040       1 shared_informer.go:320] Caches are synced for node
	I0407 14:02:03.334176       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0407 14:02:03.334317       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0407 14:02:03.334444       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0407 14:02:03.334587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0407 14:02:03.334755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-440331"
	I0407 14:02:03.336401       1 shared_informer.go:320] Caches are synced for resource quota
	I0407 14:02:03.346663       1 shared_informer.go:320] Caches are synced for HPA
	I0407 14:02:03.364127       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 14:02:03.370590       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 14:02:03.370636       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0407 14:02:03.370649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0407 14:02:07.340496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.308398ms"
	I0407 14:02:07.340879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.358µs"
	I0407 14:02:07.364426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.034232ms"
	I0407 14:02:07.364521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.026µs"
	
	
	==> kube-proxy [176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 14:00:58.262109       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 14:00:58.297841       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.76"]
	E0407 14:00:58.298159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 14:00:58.363217       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 14:00:58.363274       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 14:00:58.363299       1 server_linux.go:170] "Using iptables Proxier"
	I0407 14:00:58.366751       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 14:00:58.367572       1 server.go:497] "Version info" version="v1.32.2"
	I0407 14:00:58.367608       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:00:58.373119       1 config.go:199] "Starting service config controller"
	I0407 14:00:58.374198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 14:00:58.374255       1 config.go:329] "Starting node config controller"
	I0407 14:00:58.374263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 14:00:58.375018       1 config.go:105] "Starting endpoint slice config controller"
	I0407 14:00:58.375055       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 14:00:58.475248       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:00:58.475246       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 14:00:58.475267       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09] <==
	E0407 14:01:45.924084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:45.924157       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:45.924194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	E0407 14:01:45.924347       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.61.76:8443: connect: connection refused"
	W0407 14:01:47.039782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.039998       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.422042       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.422101       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.476136       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.476267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.056438       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:49.056537       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.760575       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:49.760675       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:50.395075       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:50.395177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:53.062329       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:53.062431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:54.884033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:54.884130       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:56.591502       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:56.591570       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	I0407 14:02:02.423805       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:02:05.922854       1 shared_informer.go:320] Caches are synced for service config
	I0407 14:02:06.623168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0] <==
	I0407 14:01:44.571127       1 serving.go:386] Generated self-signed cert in-memory
	W0407 14:01:45.628071       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 14:01:45.628110       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 14:01:45.628120       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 14:01:45.628198       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 14:01:45.714642       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 14:01:45.714683       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:01:45.722355       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 14:01:45.722564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:01:45.722612       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:01:45.723167       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 14:01:45.823262       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0407 14:01:53.913800       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5] <==
	I0407 14:01:57.430689       1 serving.go:386] Generated self-signed cert in-memory
	W0407 14:02:00.028514       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 14:02:00.030065       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 14:02:00.030157       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 14:02:00.030185       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 14:02:00.126041       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 14:02:00.126070       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:02:00.130887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 14:02:00.131098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:02:00.131120       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:02:00.131159       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 14:02:00.232205       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 14:01:59 pause-440331 kubelet[3382]: E0407 14:01:59.099044    3382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-440331\" not found" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.099890    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.100385    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.159158    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.191116    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-440331\" already exists" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.197313    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-440331\" already exists" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.199033    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-440331\" already exists" pod="kube-system/kube-apiserver-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.199082    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238654    3382 kubelet_node_status.go:125] "Node was previously registered" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238748    3382 kubelet_node_status.go:79] "Successfully registered node" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238771    3382 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.240269    3382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.243139    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-440331\" already exists" pod="kube-system/kube-controller-manager-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.243267    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.258216    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-440331\" already exists" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.258374    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.278475    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-440331\" already exists" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.929465    3382 apiserver.go:52] "Watching apiserver"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.954615    3382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.998872    3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e593e76c-63b2-4de2-9d53-98aae3fa045f-xtables-lock\") pod \"kube-proxy-42rwd\" (UID: \"e593e76c-63b2-4de2-9d53-98aae3fa045f\") " pod="kube-system/kube-proxy-42rwd"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.999042    3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e593e76c-63b2-4de2-9d53-98aae3fa045f-lib-modules\") pod \"kube-proxy-42rwd\" (UID: \"e593e76c-63b2-4de2-9d53-98aae3fa045f\") " pod="kube-system/kube-proxy-42rwd"
	Apr 07 14:02:06 pause-440331 kubelet[3382]: E0407 14:02:06.091406    3382 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034526091075576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:06 pause-440331 kubelet[3382]: E0407 14:02:06.091551    3382 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034526091075576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:16 pause-440331 kubelet[3382]: E0407 14:02:16.095319    3382 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034536093827617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:16 pause-440331 kubelet[3382]: E0407 14:02:16.095762    3382 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034536093827617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-440331 -n pause-440331
helpers_test.go:261: (dbg) Run:  kubectl --context pause-440331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-440331 -n pause-440331
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-440331 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-440331 logs -n 25: (1.521655835s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-360931             | stopped-upgrade-360931    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-expiration-837665             | cert-expiration-837665    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-017658             | running-upgrade-017658    | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-812476 sudo           | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-812476                | NoKubernetes-812476       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 13:58 UTC |
	| start   | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 13:58 UTC | 07 Apr 25 14:00 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-939490 ssh cat     | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-939490          | force-systemd-flag-939490 | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p pause-440331 --memory=2048         | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-574980 ssh               | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-574980 -- sudo        | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-574980                | cert-options-574980       | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:00 UTC |
	| start   | -p auto-471753 --memory=3072          | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:00 UTC | 07 Apr 25 14:01 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-440331                       | pause-440331              | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:02 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:02 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-471753 pgrep -a               | auto-471753               | jenkins | v1.35.0 | 07 Apr 25 14:01 UTC | 07 Apr 25 14:01 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-222032          | kubernetes-upgrade-222032 | jenkins | v1.35.0 | 07 Apr 25 14:02 UTC | 07 Apr 25 14:02 UTC |
	| start   | -p flannel-471753                     | flannel-471753            | jenkins | v1.35.0 | 07 Apr 25 14:02 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:02:05
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:02:05.966680  290738 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:02:05.967250  290738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:02:05.967304  290738 out.go:358] Setting ErrFile to fd 2...
	I0407 14:02:05.967322  290738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:02:05.967763  290738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:02:05.968729  290738 out.go:352] Setting JSON to false
	I0407 14:02:05.969621  290738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20673,"bootTime":1744013853,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:02:05.969729  290738 start.go:139] virtualization: kvm guest
	I0407 14:02:05.971498  290738 out.go:177] * [flannel-471753] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:02:05.973187  290738 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:02:05.973183  290738 notify.go:220] Checking for updates...
	I0407 14:02:05.975495  290738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:02:05.976721  290738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:02:05.977990  290738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:05.979157  290738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:02:05.980335  290738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:02:05.981831  290738 config.go:182] Loaded profile config "auto-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.981914  290738 config.go:182] Loaded profile config "cert-expiration-837665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.982019  290738 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:05.982142  290738 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:02:06.021005  290738 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 14:02:06.022287  290738 start.go:297] selected driver: kvm2
	I0407 14:02:06.022310  290738 start.go:901] validating driver "kvm2" against <nil>
	I0407 14:02:06.022328  290738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:02:06.023363  290738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:02:06.023483  290738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:02:06.040778  290738 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:02:06.040825  290738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 14:02:06.041066  290738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:02:06.041105  290738 cni.go:84] Creating CNI manager for "flannel"
	I0407 14:02:06.041111  290738 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0407 14:02:06.041169  290738 start.go:340] cluster config:
	{Name:flannel-471753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-471753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:02:06.041276  290738 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:02:06.043262  290738 out.go:177] * Starting "flannel-471753" primary control-plane node in "flannel-471753" cluster
	I0407 14:02:06.044600  290738 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:02:06.044661  290738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:02:06.044673  290738 cache.go:56] Caching tarball of preloaded images
	I0407 14:02:06.044767  290738 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:02:06.044783  290738 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:02:06.044884  290738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/config.json ...
	I0407 14:02:06.044906  290738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/config.json: {Name:mk9677baab9e7158ddf62f9b110ec8fdecc281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:06.045106  290738 start.go:360] acquireMachinesLock for flannel-471753: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:02:06.045151  290738 start.go:364] duration metric: took 26.466µs to acquireMachinesLock for "flannel-471753"
	I0407 14:02:06.045181  290738 start.go:93] Provisioning new machine with config: &{Name:flannel-471753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-471753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:02:06.045240  290738 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 14:02:03.850459  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:06.348195  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:06.047432  290738 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0407 14:02:06.047644  290738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:02:06.047704  290738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:02:06.063484  290738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44313
	I0407 14:02:06.064061  290738 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:02:06.064676  290738 main.go:141] libmachine: Using API Version  1
	I0407 14:02:06.064706  290738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:02:06.065050  290738 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:02:06.065220  290738 main.go:141] libmachine: (flannel-471753) Calling .GetMachineName
	I0407 14:02:06.065385  290738 main.go:141] libmachine: (flannel-471753) Calling .DriverName
	I0407 14:02:06.065577  290738 start.go:159] libmachine.API.Create for "flannel-471753" (driver="kvm2")
	I0407 14:02:06.065611  290738 client.go:168] LocalClient.Create starting
	I0407 14:02:06.065647  290738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem
	I0407 14:02:06.065685  290738 main.go:141] libmachine: Decoding PEM data...
	I0407 14:02:06.065706  290738 main.go:141] libmachine: Parsing certificate...
	I0407 14:02:06.065794  290738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem
	I0407 14:02:06.065826  290738 main.go:141] libmachine: Decoding PEM data...
	I0407 14:02:06.065844  290738 main.go:141] libmachine: Parsing certificate...
	I0407 14:02:06.065870  290738 main.go:141] libmachine: Running pre-create checks...
	I0407 14:02:06.065886  290738 main.go:141] libmachine: (flannel-471753) Calling .PreCreateCheck
	I0407 14:02:06.066228  290738 main.go:141] libmachine: (flannel-471753) Calling .GetConfigRaw
	I0407 14:02:06.066610  290738 main.go:141] libmachine: Creating machine...
	I0407 14:02:06.066625  290738 main.go:141] libmachine: (flannel-471753) Calling .Create
	I0407 14:02:06.066791  290738 main.go:141] libmachine: (flannel-471753) creating KVM machine...
	I0407 14:02:06.066806  290738 main.go:141] libmachine: (flannel-471753) creating network...
	I0407 14:02:06.067947  290738 main.go:141] libmachine: (flannel-471753) DBG | found existing default KVM network
	I0407 14:02:06.068866  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.068717  290761 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:e1:02} reservation:<nil>}
	I0407 14:02:06.069801  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.069710  290761 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209a50}
	I0407 14:02:06.069829  290738 main.go:141] libmachine: (flannel-471753) DBG | created network xml: 
	I0407 14:02:06.069845  290738 main.go:141] libmachine: (flannel-471753) DBG | <network>
	I0407 14:02:06.069861  290738 main.go:141] libmachine: (flannel-471753) DBG |   <name>mk-flannel-471753</name>
	I0407 14:02:06.069881  290738 main.go:141] libmachine: (flannel-471753) DBG |   <dns enable='no'/>
	I0407 14:02:06.069895  290738 main.go:141] libmachine: (flannel-471753) DBG |   
	I0407 14:02:06.069904  290738 main.go:141] libmachine: (flannel-471753) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0407 14:02:06.069921  290738 main.go:141] libmachine: (flannel-471753) DBG |     <dhcp>
	I0407 14:02:06.069928  290738 main.go:141] libmachine: (flannel-471753) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0407 14:02:06.069934  290738 main.go:141] libmachine: (flannel-471753) DBG |     </dhcp>
	I0407 14:02:06.069943  290738 main.go:141] libmachine: (flannel-471753) DBG |   </ip>
	I0407 14:02:06.069947  290738 main.go:141] libmachine: (flannel-471753) DBG |   
	I0407 14:02:06.069952  290738 main.go:141] libmachine: (flannel-471753) DBG | </network>
	I0407 14:02:06.069955  290738 main.go:141] libmachine: (flannel-471753) DBG | 
	I0407 14:02:06.075366  290738 main.go:141] libmachine: (flannel-471753) DBG | trying to create private KVM network mk-flannel-471753 192.168.50.0/24...
	I0407 14:02:06.151834  290738 main.go:141] libmachine: (flannel-471753) DBG | private KVM network mk-flannel-471753 192.168.50.0/24 created
	I0407 14:02:06.151868  290738 main.go:141] libmachine: (flannel-471753) setting up store path in /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 ...
	I0407 14:02:06.151889  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.151809  290761 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:06.151908  290738 main.go:141] libmachine: (flannel-471753) building disk image from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 14:02:06.152007  290738 main.go:141] libmachine: (flannel-471753) Downloading /home/jenkins/minikube-integration/20598-242355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 14:02:06.433560  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.433424  290761 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/id_rsa...
	I0407 14:02:06.834559  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.834408  290761 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/flannel-471753.rawdisk...
	I0407 14:02:06.834592  290738 main.go:141] libmachine: (flannel-471753) DBG | Writing magic tar header
	I0407 14:02:06.834611  290738 main.go:141] libmachine: (flannel-471753) DBG | Writing SSH key tar header
	I0407 14:02:06.834624  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:06.834535  290761 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 ...
	I0407 14:02:06.834637  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753
	I0407 14:02:06.834648  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines
	I0407 14:02:06.834660  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753 (perms=drwx------)
	I0407 14:02:06.834669  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:02:06.834682  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines (perms=drwxr-xr-x)
	I0407 14:02:06.834701  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube (perms=drwxr-xr-x)
	I0407 14:02:06.834714  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration/20598-242355 (perms=drwxrwxr-x)
	I0407 14:02:06.834724  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355
	I0407 14:02:06.834738  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 14:02:06.834751  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home/jenkins
	I0407 14:02:06.834760  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 14:02:06.834774  290738 main.go:141] libmachine: (flannel-471753) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 14:02:06.834785  290738 main.go:141] libmachine: (flannel-471753) creating domain...
	I0407 14:02:06.834822  290738 main.go:141] libmachine: (flannel-471753) DBG | checking permissions on dir: /home
	I0407 14:02:06.834844  290738 main.go:141] libmachine: (flannel-471753) DBG | skipping /home - not owner
	I0407 14:02:06.835983  290738 main.go:141] libmachine: (flannel-471753) define libvirt domain using xml: 
	I0407 14:02:06.836012  290738 main.go:141] libmachine: (flannel-471753) <domain type='kvm'>
	I0407 14:02:06.836022  290738 main.go:141] libmachine: (flannel-471753)   <name>flannel-471753</name>
	I0407 14:02:06.836034  290738 main.go:141] libmachine: (flannel-471753)   <memory unit='MiB'>3072</memory>
	I0407 14:02:06.836043  290738 main.go:141] libmachine: (flannel-471753)   <vcpu>2</vcpu>
	I0407 14:02:06.836049  290738 main.go:141] libmachine: (flannel-471753)   <features>
	I0407 14:02:06.836056  290738 main.go:141] libmachine: (flannel-471753)     <acpi/>
	I0407 14:02:06.836064  290738 main.go:141] libmachine: (flannel-471753)     <apic/>
	I0407 14:02:06.836080  290738 main.go:141] libmachine: (flannel-471753)     <pae/>
	I0407 14:02:06.836086  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836093  290738 main.go:141] libmachine: (flannel-471753)   </features>
	I0407 14:02:06.836100  290738 main.go:141] libmachine: (flannel-471753)   <cpu mode='host-passthrough'>
	I0407 14:02:06.836139  290738 main.go:141] libmachine: (flannel-471753)   
	I0407 14:02:06.836168  290738 main.go:141] libmachine: (flannel-471753)   </cpu>
	I0407 14:02:06.836181  290738 main.go:141] libmachine: (flannel-471753)   <os>
	I0407 14:02:06.836189  290738 main.go:141] libmachine: (flannel-471753)     <type>hvm</type>
	I0407 14:02:06.836199  290738 main.go:141] libmachine: (flannel-471753)     <boot dev='cdrom'/>
	I0407 14:02:06.836214  290738 main.go:141] libmachine: (flannel-471753)     <boot dev='hd'/>
	I0407 14:02:06.836226  290738 main.go:141] libmachine: (flannel-471753)     <bootmenu enable='no'/>
	I0407 14:02:06.836235  290738 main.go:141] libmachine: (flannel-471753)   </os>
	I0407 14:02:06.836245  290738 main.go:141] libmachine: (flannel-471753)   <devices>
	I0407 14:02:06.836256  290738 main.go:141] libmachine: (flannel-471753)     <disk type='file' device='cdrom'>
	I0407 14:02:06.836271  290738 main.go:141] libmachine: (flannel-471753)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/boot2docker.iso'/>
	I0407 14:02:06.836286  290738 main.go:141] libmachine: (flannel-471753)       <target dev='hdc' bus='scsi'/>
	I0407 14:02:06.836297  290738 main.go:141] libmachine: (flannel-471753)       <readonly/>
	I0407 14:02:06.836319  290738 main.go:141] libmachine: (flannel-471753)     </disk>
	I0407 14:02:06.836334  290738 main.go:141] libmachine: (flannel-471753)     <disk type='file' device='disk'>
	I0407 14:02:06.836351  290738 main.go:141] libmachine: (flannel-471753)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 14:02:06.836367  290738 main.go:141] libmachine: (flannel-471753)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/flannel-471753/flannel-471753.rawdisk'/>
	I0407 14:02:06.836379  290738 main.go:141] libmachine: (flannel-471753)       <target dev='hda' bus='virtio'/>
	I0407 14:02:06.836434  290738 main.go:141] libmachine: (flannel-471753)     </disk>
	I0407 14:02:06.836460  290738 main.go:141] libmachine: (flannel-471753)     <interface type='network'>
	I0407 14:02:06.836488  290738 main.go:141] libmachine: (flannel-471753)       <source network='mk-flannel-471753'/>
	I0407 14:02:06.836514  290738 main.go:141] libmachine: (flannel-471753)       <model type='virtio'/>
	I0407 14:02:06.836525  290738 main.go:141] libmachine: (flannel-471753)     </interface>
	I0407 14:02:06.836532  290738 main.go:141] libmachine: (flannel-471753)     <interface type='network'>
	I0407 14:02:06.836543  290738 main.go:141] libmachine: (flannel-471753)       <source network='default'/>
	I0407 14:02:06.836547  290738 main.go:141] libmachine: (flannel-471753)       <model type='virtio'/>
	I0407 14:02:06.836557  290738 main.go:141] libmachine: (flannel-471753)     </interface>
	I0407 14:02:06.836578  290738 main.go:141] libmachine: (flannel-471753)     <serial type='pty'>
	I0407 14:02:06.836597  290738 main.go:141] libmachine: (flannel-471753)       <target port='0'/>
	I0407 14:02:06.836614  290738 main.go:141] libmachine: (flannel-471753)     </serial>
	I0407 14:02:06.836625  290738 main.go:141] libmachine: (flannel-471753)     <console type='pty'>
	I0407 14:02:06.836635  290738 main.go:141] libmachine: (flannel-471753)       <target type='serial' port='0'/>
	I0407 14:02:06.836643  290738 main.go:141] libmachine: (flannel-471753)     </console>
	I0407 14:02:06.836653  290738 main.go:141] libmachine: (flannel-471753)     <rng model='virtio'>
	I0407 14:02:06.836660  290738 main.go:141] libmachine: (flannel-471753)       <backend model='random'>/dev/random</backend>
	I0407 14:02:06.836667  290738 main.go:141] libmachine: (flannel-471753)     </rng>
	I0407 14:02:06.836675  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836688  290738 main.go:141] libmachine: (flannel-471753)     
	I0407 14:02:06.836710  290738 main.go:141] libmachine: (flannel-471753)   </devices>
	I0407 14:02:06.836735  290738 main.go:141] libmachine: (flannel-471753) </domain>
	I0407 14:02:06.836750  290738 main.go:141] libmachine: (flannel-471753) 
	I0407 14:02:06.841125  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:a0:0b:86 in network default
	I0407 14:02:06.842114  290738 main.go:141] libmachine: (flannel-471753) starting domain...
	I0407 14:02:06.842138  290738 main.go:141] libmachine: (flannel-471753) ensuring networks are active...
	I0407 14:02:06.842148  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:06.843181  290738 main.go:141] libmachine: (flannel-471753) Ensuring network default is active
	I0407 14:02:06.843500  290738 main.go:141] libmachine: (flannel-471753) Ensuring network mk-flannel-471753 is active
	I0407 14:02:06.844007  290738 main.go:141] libmachine: (flannel-471753) getting domain XML...
	I0407 14:02:06.844821  290738 main.go:141] libmachine: (flannel-471753) creating domain...
	I0407 14:02:08.117673  290738 main.go:141] libmachine: (flannel-471753) waiting for IP...
	I0407 14:02:08.118600  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.119043  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.119094  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.119043  290761 retry.go:31] will retry after 267.143316ms: waiting for domain to come up
	I0407 14:02:08.387368  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.387985  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.388010  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.387951  290761 retry.go:31] will retry after 288.53872ms: waiting for domain to come up
	I0407 14:02:08.678544  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:08.679084  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:08.679111  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:08.679036  290761 retry.go:31] will retry after 487.196115ms: waiting for domain to come up
	I0407 14:02:09.167804  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:09.168374  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:09.168396  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:09.168354  290761 retry.go:31] will retry after 383.713176ms: waiting for domain to come up
	I0407 14:02:09.553845  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:09.554421  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:09.554485  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:09.554383  290761 retry.go:31] will retry after 507.623444ms: waiting for domain to come up
	I0407 14:02:10.064065  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:10.064628  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:10.064673  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:10.064590  290761 retry.go:31] will retry after 755.704153ms: waiting for domain to come up
	I0407 14:02:10.821542  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:10.822041  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:10.822099  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:10.822011  290761 retry.go:31] will retry after 932.523671ms: waiting for domain to come up
	I0407 14:02:08.350888  290166 pod_ready.go:103] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"False"
	I0407 14:02:09.850104  290166 pod_ready.go:93] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:09.850128  290166 pod_ready.go:82] duration metric: took 8.007036235s for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:09.850137  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:11.856832  290166 pod_ready.go:93] pod "kube-apiserver-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:11.856860  290166 pod_ready.go:82] duration metric: took 2.006716587s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:11.856872  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.862872  290166 pod_ready.go:93] pod "kube-controller-manager-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.862905  290166 pod_ready.go:82] duration metric: took 2.006022919s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.862920  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.869050  290166 pod_ready.go:93] pod "kube-proxy-42rwd" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.869080  290166 pod_ready.go:82] duration metric: took 6.151389ms for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.869095  290166 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.873451  290166 pod_ready.go:93] pod "kube-scheduler-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:13.873468  290166 pod_ready.go:82] duration metric: took 4.36558ms for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:13.873474  290166 pod_ready.go:39] duration metric: took 12.039057001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:13.873493  290166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:02:13.893988  290166 ops.go:34] apiserver oom_adj: -16
	I0407 14:02:13.894018  290166 kubeadm.go:597] duration metric: took 30.443283794s to restartPrimaryControlPlane
	I0407 14:02:13.894031  290166 kubeadm.go:394] duration metric: took 30.604408284s to StartCluster
	I0407 14:02:13.894055  290166 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:13.894150  290166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:02:13.895242  290166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:02:13.895546  290166 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.76 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:02:13.895847  290166 config.go:182] Loaded profile config "pause-440331": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:02:13.895802  290166 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:02:13.897142  290166 out.go:177] * Enabled addons: 
	I0407 14:02:13.897156  290166 out.go:177] * Verifying Kubernetes components...
	I0407 14:02:11.756758  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:11.757284  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:11.757314  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:11.757252  290761 retry.go:31] will retry after 1.037501795s: waiting for domain to come up
	I0407 14:02:12.796052  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:12.796604  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:12.796629  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:12.796576  290761 retry.go:31] will retry after 1.411367229s: waiting for domain to come up
	I0407 14:02:14.209694  290738 main.go:141] libmachine: (flannel-471753) DBG | domain flannel-471753 has defined MAC address 52:54:00:97:94:55 in network mk-flannel-471753
	I0407 14:02:14.210272  290738 main.go:141] libmachine: (flannel-471753) DBG | unable to find current IP address of domain flannel-471753 in network mk-flannel-471753
	I0407 14:02:14.210300  290738 main.go:141] libmachine: (flannel-471753) DBG | I0407 14:02:14.210239  290761 retry.go:31] will retry after 2.109587737s: waiting for domain to come up
	I0407 14:02:13.898327  290166 addons.go:514] duration metric: took 2.532305ms for enable addons: enabled=[]
	I0407 14:02:13.898372  290166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:02:14.085076  290166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:02:14.104212  290166 node_ready.go:35] waiting up to 6m0s for node "pause-440331" to be "Ready" ...
	I0407 14:02:14.107358  290166 node_ready.go:49] node "pause-440331" has status "Ready":"True"
	I0407 14:02:14.107380  290166 node_ready.go:38] duration metric: took 3.137183ms for node "pause-440331" to be "Ready" ...
	I0407 14:02:14.107389  290166 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:14.110474  290166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.115245  290166 pod_ready.go:93] pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.115266  290166 pod_ready.go:82] duration metric: took 4.767143ms for pod "coredns-668d6bf9bc-mtscb" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.115277  290166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.261170  290166 pod_ready.go:93] pod "etcd-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.261202  290166 pod_ready.go:82] duration metric: took 145.917612ms for pod "etcd-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.261215  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.660215  290166 pod_ready.go:93] pod "kube-apiserver-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:14.660248  290166 pod_ready.go:82] duration metric: took 399.024123ms for pod "kube-apiserver-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:14.660264  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.060099  290166 pod_ready.go:93] pod "kube-controller-manager-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.060132  290166 pod_ready.go:82] duration metric: took 399.858ms for pod "kube-controller-manager-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.060150  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.461977  290166 pod_ready.go:93] pod "kube-proxy-42rwd" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.462008  290166 pod_ready.go:82] duration metric: took 401.851187ms for pod "kube-proxy-42rwd" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.462019  290166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.861779  290166 pod_ready.go:93] pod "kube-scheduler-pause-440331" in "kube-system" namespace has status "Ready":"True"
	I0407 14:02:15.861808  290166 pod_ready.go:82] duration metric: took 399.781438ms for pod "kube-scheduler-pause-440331" in "kube-system" namespace to be "Ready" ...
	I0407 14:02:15.861816  290166 pod_ready.go:39] duration metric: took 1.75441794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:02:15.861832  290166 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:02:15.861889  290166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:02:15.877325  290166 api_server.go:72] duration metric: took 1.981749278s to wait for apiserver process to appear ...
	I0407 14:02:15.877351  290166 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:02:15.877368  290166 api_server.go:253] Checking apiserver healthz at https://192.168.61.76:8443/healthz ...
	I0407 14:02:15.883110  290166 api_server.go:279] https://192.168.61.76:8443/healthz returned 200:
	ok
	I0407 14:02:15.884313  290166 api_server.go:141] control plane version: v1.32.2
	I0407 14:02:15.884341  290166 api_server.go:131] duration metric: took 6.982088ms to wait for apiserver health ...
	I0407 14:02:15.884352  290166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:02:16.062244  290166 system_pods.go:59] 6 kube-system pods found
	I0407 14:02:16.062281  290166 system_pods.go:61] "coredns-668d6bf9bc-mtscb" [c192111e-3f24-4700-bf04-3a82f48faa32] Running
	I0407 14:02:16.062288  290166 system_pods.go:61] "etcd-pause-440331" [d84f0b9d-be6b-40a8-a298-311e44f20bc5] Running
	I0407 14:02:16.062294  290166 system_pods.go:61] "kube-apiserver-pause-440331" [3dd8017f-c6a3-42f0-a77c-4088c6c70332] Running
	I0407 14:02:16.062299  290166 system_pods.go:61] "kube-controller-manager-pause-440331" [371a0a83-fbb0-4128-b81f-620e2b82df28] Running
	I0407 14:02:16.062305  290166 system_pods.go:61] "kube-proxy-42rwd" [e593e76c-63b2-4de2-9d53-98aae3fa045f] Running
	I0407 14:02:16.062311  290166 system_pods.go:61] "kube-scheduler-pause-440331" [249a64fd-014a-40c3-b85d-6889b1c740ee] Running
	I0407 14:02:16.062319  290166 system_pods.go:74] duration metric: took 177.959241ms to wait for pod list to return data ...
	I0407 14:02:16.062335  290166 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:02:16.260844  290166 default_sa.go:45] found service account: "default"
	I0407 14:02:16.260885  290166 default_sa.go:55] duration metric: took 198.531104ms for default service account to be created ...
	I0407 14:02:16.260898  290166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 14:02:16.462172  290166 system_pods.go:86] 6 kube-system pods found
	I0407 14:02:16.462213  290166 system_pods.go:89] "coredns-668d6bf9bc-mtscb" [c192111e-3f24-4700-bf04-3a82f48faa32] Running
	I0407 14:02:16.462223  290166 system_pods.go:89] "etcd-pause-440331" [d84f0b9d-be6b-40a8-a298-311e44f20bc5] Running
	I0407 14:02:16.462230  290166 system_pods.go:89] "kube-apiserver-pause-440331" [3dd8017f-c6a3-42f0-a77c-4088c6c70332] Running
	I0407 14:02:16.462237  290166 system_pods.go:89] "kube-controller-manager-pause-440331" [371a0a83-fbb0-4128-b81f-620e2b82df28] Running
	I0407 14:02:16.462244  290166 system_pods.go:89] "kube-proxy-42rwd" [e593e76c-63b2-4de2-9d53-98aae3fa045f] Running
	I0407 14:02:16.462249  290166 system_pods.go:89] "kube-scheduler-pause-440331" [249a64fd-014a-40c3-b85d-6889b1c740ee] Running
	I0407 14:02:16.462260  290166 system_pods.go:126] duration metric: took 201.35396ms to wait for k8s-apps to be running ...
	I0407 14:02:16.462274  290166 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:02:16.462334  290166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:02:16.483585  290166 system_svc.go:56] duration metric: took 21.299882ms WaitForService to wait for kubelet
	I0407 14:02:16.483623  290166 kubeadm.go:582] duration metric: took 2.588051364s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:02:16.483654  290166 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:02:16.661443  290166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:02:16.661469  290166 node_conditions.go:123] node cpu capacity is 2
	I0407 14:02:16.661481  290166 node_conditions.go:105] duration metric: took 177.821391ms to run NodePressure ...
	I0407 14:02:16.661493  290166 start.go:241] waiting for startup goroutines ...
	I0407 14:02:16.661499  290166 start.go:246] waiting for cluster config update ...
	I0407 14:02:16.661506  290166 start.go:255] writing updated cluster config ...
	I0407 14:02:16.661822  290166 ssh_runner.go:195] Run: rm -f paused
	I0407 14:02:16.716691  290166 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:02:16.718841  290166 out.go:177] * Done! kubectl is now configured to use "pause-440331" cluster and "default" namespace by default
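	For reference, the api_server.go lines at 14:02:15 above record minikube polling the control plane's /healthz endpoint until it answers 200 with body "ok" before declaring the restarted cluster ready. The following is a minimal standalone sketch of that kind of probe, assuming only Go's standard library: the endpoint address is copied from the log, while the helper name, the retry interval, and the InsecureSkipVerify transport (standing in for the client-certificate TLS config minikube actually builds from the cluster's certificates) are illustrative, not minikube's implementation.

	// healthzprobe.go: hypothetical sketch of an apiserver /healthz poll.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 with body "ok",
	// or gives up after timeout. Illustrative helper, not minikube code.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Stand-in for the client-cert TLS config a real probe would use.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					// Corresponds to the "returned 200:" / "ok" lines in the log above.
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver healthz at %s not ready after %s", url, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.61.76:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}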
	
	
	==> CRI-O <==
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.726363288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034539726335886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc4178fb-b09e-459d-b42b-2980a98c2b8b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.727291498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83fc94e0-fe9a-4def-813f-e119e7a27ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.727358939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83fc94e0-fe9a-4def-813f-e119e7a27ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.727677126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83fc94e0-fe9a-4def-813f-e119e7a27ef0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.786063325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81e13e48-9480-4500-84d4-97a890f0702f name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.786143682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81e13e48-9480-4500-84d4-97a890f0702f name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.787438817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c06ec80d-5e7c-4aba-9f38-27c7b8197b20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.787781236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034539787759359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c06ec80d-5e7c-4aba-9f38-27c7b8197b20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.789128121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ca20e00-1464-42db-a675-0e4c7c83df51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.789184477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ca20e00-1464-42db-a675-0e4c7c83df51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.789436322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ca20e00-1464-42db-a675-0e4c7c83df51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.839084737Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff866da9-77a5-42c3-a05a-b9f8c0dd6c45 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.839204016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff866da9-77a5-42c3-a05a-b9f8c0dd6c45 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.840219705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a28d5e2e-609e-4de9-b182-24f62f2376e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.840616346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034539840593661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a28d5e2e-609e-4de9-b182-24f62f2376e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.841295722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e6bb0b3-989e-4a9e-b662-2de28d0d8055 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.841347031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e6bb0b3-989e-4a9e-b662-2de28d0d8055 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.841650029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e6bb0b3-989e-4a9e-b662-2de28d0d8055 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.893529045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a51782e-c3f0-49af-8030-ad6fed3cc32f name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.893634177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a51782e-c3f0-49af-8030-ad6fed3cc32f name=/runtime.v1.RuntimeService/Version
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.895509883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edb6cb29-95f9-4913-8aa5-a40fcd46bbb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.895886236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034539895861531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edb6cb29-95f9-4913-8aa5-a40fcd46bbb9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.897277736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f25f9f01-fc6a-4e56-bdbd-3ce92487156b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.897442912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f25f9f01-fc6a-4e56-bdbd-3ce92487156b name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:02:19 pause-440331 crio[2406]: time="2025-04-07 14:02:19.897964441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744034516472865101,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744034516491484957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744034516447455804,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744034516459744417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09,PodSandboxId:e714b418e0969e9e15904bbbd292f8f32a81f58fa495d002d017e2b46c85f048,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744034502716497813,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c,PodSandboxId:215ae343dceba4e9c6ddb3aada2c57d52107422eae2d7842eeebf9e7cc7d697d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744034503601787996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9,PodSandboxId:4adaabdab2b47e6ba07ef0df53337457b512bf3811af2ea182d0eb61ea1840b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_EXITED,CreatedAt:1744034502787394272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fea2817ea704715f2d091c12447e5ea6,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240,PodSandboxId:8b2cfd21bab180a24bfbec1a933a5c8edafae007897ab64f2d380ee6e54e8c2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_EXITED,CreatedAt:1744034502596390720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee07b87c4eab396cbc24c24ef698433,},Annotations:map[string]string{io.kubernetes.
container.hash: 7745040f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0,PodSandboxId:eeefa0e3af071884a29d34d1cc0854d980a5d6b82ed2d7a017480ca35072e618,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_EXITED,CreatedAt:1744034502572872877,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31df28138b7606449a7010077633650,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5
aaea3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a,PodSandboxId:c98f97fdc30af90f0bd112eae45b9b11ff00689d4d0751497c6f9214093686d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1744034502497747021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-440331,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61a26a62e3e76b4782b67895a5c96dae,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b,PodSandboxId:cdb797faa7c3b64f1a21f0e3df9fc164377fd2504bda83274c695191258115e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1744034458038330116,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mtscb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c192111e-3f24-4700-bf04-3a82f48faa32,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750,PodSandboxId:8404038eb30c6d73efd2805f05a1938e4c28560a936bbfbbf1d25e044060c175,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_EXITED,CreatedAt:1744034457536791157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42rwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e593e76c-63b2-4de2-9d53-98aae3fa045f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f25f9f01-fc6a-4e56-bdbd-3ce92487156b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6ffa8a6583f4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago       Running             etcd                      2                   c98f97fdc30af       etcd-pause-440331
	a7af811745df0       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   23 seconds ago       Running             kube-apiserver            2                   8b2cfd21bab18       kube-apiserver-pause-440331
	c447fdbe487f6       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   23 seconds ago       Running             kube-controller-manager   2                   4adaabdab2b47       kube-controller-manager-pause-440331
	c230be78d0259       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   23 seconds ago       Running             kube-scheduler            2                   eeefa0e3af071       kube-scheduler-pause-440331
	dc3f5089a6341       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Running             coredns                   1                   215ae343dceba       coredns-668d6bf9bc-mtscb
	0ed17fdbbe763       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   37 seconds ago       Exited              kube-controller-manager   1                   4adaabdab2b47       kube-controller-manager-pause-440331
	3237ab3c805db       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   37 seconds ago       Running             kube-proxy                1                   e714b418e0969       kube-proxy-42rwd
	6e55d632fbe6f       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   37 seconds ago       Exited              kube-apiserver            1                   8b2cfd21bab18       kube-apiserver-pause-440331
	a50b1c7960acd       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   37 seconds ago       Exited              kube-scheduler            1                   eeefa0e3af071       kube-scheduler-pause-440331
	c5a8fa1f17c47       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   37 seconds ago       Exited              etcd                      1                   c98f97fdc30af       etcd-pause-440331
	f21fae4b13796       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   cdb797faa7c3b       coredns-668d6bf9bc-mtscb
	176ad0ba0fc50       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   About a minute ago   Exited              kube-proxy                0                   8404038eb30c6       kube-proxy-42rwd
	
	
	==> coredns [dc3f5089a63418f8a0e5836b21c290c812ff7a6d0ea8de6e324986224e08c15c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60954 - 2591 "HINFO IN 2330035401213661035.4473101970594804087. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018633422s
	
	
	==> coredns [f21fae4b13796428295826a9fd9284c77f9555d42266729b9b404e3519e4f69b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[588057979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.329) (total time: 30002ms):
	Trace[588057979]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (14:01:28.332)
	Trace[588057979]: [30.002532404s] [30.002532404s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1559066714]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.329) (total time: 30003ms):
	Trace[1559066714]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (14:01:28.332)
	Trace[1559066714]: [30.003321056s] [30.003321056s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1458740820]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 14:00:58.330) (total time: 30001ms):
	Trace[1458740820]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:01:28.332)
	Trace[1458740820]: [30.001995304s] [30.001995304s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-440331
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-440331
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=pause-440331
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_00_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:00:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-440331
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:02:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 14:02:00 +0000   Mon, 07 Apr 2025 14:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.76
	  Hostname:    pause-440331
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7e3ac1735344f759c0120ab673dd44f
	  System UUID:                a7e3ac17-3534-4f75-9c01-20ab673dd44f
	  Boot ID:                    a4dcac2e-efd4-4f5b-bbee-1eaa23c70b5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-mtscb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-pause-440331                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-440331             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-440331    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-42rwd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-440331             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeHasSufficientPID     94s (x7 over 94s)  kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    94s (x8 over 94s)  kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  94s (x8 over 94s)  kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                88s                kubelet          Node pause-440331 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           84s                node-controller  Node pause-440331 event: Registered Node pause-440331 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-440331 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-440331 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-440331 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-440331 event: Registered Node pause-440331 in Controller
	
	
	==> dmesg <==
	[  +7.689732] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.063421] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064020] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.174099] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137307] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300362] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.419815] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.057862] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.820606] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.075791] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.020258] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.078750] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.409080] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.138546] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 7 14:01] kauditd_printk_skb: 88 callbacks suppressed
	[ +31.237944] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +0.144875] systemd-fstab-generator[2345]: Ignoring "noauto" option for root device
	[  +0.201100] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.167996] systemd-fstab-generator[2371]: Ignoring "noauto" option for root device
	[  +0.327292] systemd-fstab-generator[2399]: Ignoring "noauto" option for root device
	[  +0.731061] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[ +10.361712] kauditd_printk_skb: 196 callbacks suppressed
	[  +3.597345] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	[Apr 7 14:02] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.286959] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	
	
	==> etcd [c5a8fa1f17c47a0f1768b9a77fd4e01bfad6d192a0f3a60352ea8f27c810b26a] <==
	{"level":"info","ts":"2025-04-07T14:01:44.072075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T14:01:44.072117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgPreVoteResp from 9cf9907e2fa71306 at term 2"}
	{"level":"info","ts":"2025-04-07T14:01:44.072130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgVoteResp from 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.072156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cf9907e2fa71306 elected leader 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:44.079239Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9cf9907e2fa71306","local-member-attributes":"{Name:pause-440331 ClientURLs:[https://192.168.61.76:2379]}","request-path":"/0/members/9cf9907e2fa71306/attributes","cluster-id":"24199a2c11709dba","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T14:01:44.079333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:44.081724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:44.084624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:44.087430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T14:01:44.088044Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:44.088456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:44.088507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:44.097732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.76:2379"}
	{"level":"info","ts":"2025-04-07T14:01:54.052862Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-07T14:01:54.052990Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-440331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"]}
	{"level":"warn","ts":"2025-04-07T14:01:54.053153Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.053261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.054822Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T14:01:54.054874Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.76:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-07T14:01:54.054917Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9cf9907e2fa71306","current-leader-member-id":"9cf9907e2fa71306"}
	{"level":"info","ts":"2025-04-07T14:01:54.058256Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:54.058395Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:54.058406Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-440331","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"]}
	
	
	==> etcd [e6ffa8a6583f46b23fe9546404e941c62137cfa35b7cea3eba15338ae23616ab] <==
	{"level":"info","ts":"2025-04-07T14:01:56.857445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 switched to configuration voters=(11311230810757468934)"}
	{"level":"info","ts":"2025-04-07T14:01:56.890649Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T14:01:56.896017Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24199a2c11709dba","local-member-id":"9cf9907e2fa71306","added-peer-id":"9cf9907e2fa71306","added-peer-peer-urls":["https://192.168.61.76:2380"]}
	{"level":"info","ts":"2025-04-07T14:01:56.896176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24199a2c11709dba","local-member-id":"9cf9907e2fa71306","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:01:56.896222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:01:56.902311Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9cf9907e2fa71306","initial-advertise-peer-urls":["https://192.168.61.76:2380"],"listen-peer-urls":["https://192.168.61.76:2380"],"advertise-client-urls":["https://192.168.61.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T14:01:56.902361Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T14:01:56.902422Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:56.902446Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.61.76:2380"}
	{"level":"info","ts":"2025-04-07T14:01:58.718798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgPreVoteResp from 9cf9907e2fa71306 at term 3"}
	{"level":"info","ts":"2025-04-07T14:01:58.719126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became candidate at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 received MsgVoteResp from 9cf9907e2fa71306 at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9cf9907e2fa71306 became leader at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.719182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9cf9907e2fa71306 elected leader 9cf9907e2fa71306 at term 4"}
	{"level":"info","ts":"2025-04-07T14:01:58.724183Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9cf9907e2fa71306","local-member-attributes":"{Name:pause-440331 ClientURLs:[https://192.168.61.76:2379]}","request-path":"/0/members/9cf9907e2fa71306/attributes","cluster-id":"24199a2c11709dba","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T14:01:58.724190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:58.724540Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:58.724575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T14:01:58.724212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:01:58.725298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:58.725322Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:01:58.725917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T14:01:58.726548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.76:2379"}
	
	
	==> kernel <==
	 14:02:20 up 2 min,  0 users,  load average: 1.04, 0.51, 0.19
	Linux pause-440331 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6e55d632fbe6fa487a2a31c33ab92f26159a6bfe70958a506ecdc030d0680240] <==
	I0407 14:01:45.910198       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0407 14:01:45.910359       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0407 14:01:45.910574       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0407 14:01:45.913837       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0407 14:01:45.910528       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0407 14:01:45.910535       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0407 14:01:45.913817       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0407 14:01:45.914688       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0407 14:01:45.914767       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0407 14:01:46.598283       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:46.615513       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.599111       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:47.614991       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:48.598708       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:48.614594       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.598263       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:49.614328       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:50.598571       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:50.614819       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:51.599141       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:51.614652       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:52.598478       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:52.615639       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:53.598240       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0407 14:01:53.615299       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [a7af811745df0f25cfc520cfb43cb4bae41af0065f64d3a9c00213734e289f59] <==
	I0407 14:02:00.093342       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 14:02:00.093395       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 14:02:00.102790       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0407 14:02:00.110702       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 14:02:00.117175       1 aggregator.go:171] initial CRD sync complete...
	I0407 14:02:00.117214       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 14:02:00.117223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 14:02:00.117231       1 cache.go:39] Caches are synced for autoregister controller
	I0407 14:02:00.131425       1 shared_informer.go:320] Caches are synced for configmaps
	I0407 14:02:00.131521       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0407 14:02:00.131835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 14:02:00.132272       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 14:02:00.131850       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 14:02:00.150280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 14:02:00.169132       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 14:02:00.939024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 14:02:01.004405       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	W0407 14:02:01.358759       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.76]
	I0407 14:02:01.359730       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 14:02:01.368103       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 14:02:01.692386       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 14:02:01.740861       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 14:02:01.776039       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 14:02:01.788208       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 14:02:07.332690       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0ed17fdbbe763322f81754f4c57a2f02863a08399162c50a375d842b1a3475d9] <==
	
	
	==> kube-controller-manager [c447fdbe487f6abb45b934304ee59179a63ba6aa692fcaa78fedad9ac77dbeeb] <==
	I0407 14:02:03.313268       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0407 14:02:03.315718       1 shared_informer.go:320] Caches are synced for TTL
	I0407 14:02:03.321292       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 14:02:03.325163       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0407 14:02:03.325853       1 shared_informer.go:320] Caches are synced for daemon sets
	I0407 14:02:03.326755       1 shared_informer.go:320] Caches are synced for PV protection
	I0407 14:02:03.327164       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0407 14:02:03.327293       1 shared_informer.go:320] Caches are synced for persistent volume
	I0407 14:02:03.332661       1 shared_informer.go:320] Caches are synced for endpoint
	I0407 14:02:03.334040       1 shared_informer.go:320] Caches are synced for node
	I0407 14:02:03.334176       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0407 14:02:03.334317       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0407 14:02:03.334444       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0407 14:02:03.334587       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0407 14:02:03.334755       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-440331"
	I0407 14:02:03.336401       1 shared_informer.go:320] Caches are synced for resource quota
	I0407 14:02:03.346663       1 shared_informer.go:320] Caches are synced for HPA
	I0407 14:02:03.364127       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 14:02:03.370590       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 14:02:03.370636       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0407 14:02:03.370649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0407 14:02:07.340496       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.308398ms"
	I0407 14:02:07.340879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.358µs"
	I0407 14:02:07.364426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.034232ms"
	I0407 14:02:07.364521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.026µs"
	
	
	==> kube-proxy [176ad0ba0fc50544d61073fc5011e1ba7998bf78d464a6d8d5f461513c0b2750] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 14:00:58.262109       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 14:00:58.297841       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.76"]
	E0407 14:00:58.298159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 14:00:58.363217       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 14:00:58.363274       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 14:00:58.363299       1 server_linux.go:170] "Using iptables Proxier"
	I0407 14:00:58.366751       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 14:00:58.367572       1 server.go:497] "Version info" version="v1.32.2"
	I0407 14:00:58.367608       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:00:58.373119       1 config.go:199] "Starting service config controller"
	I0407 14:00:58.374198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 14:00:58.374255       1 config.go:329] "Starting node config controller"
	I0407 14:00:58.374263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 14:00:58.375018       1 config.go:105] "Starting endpoint slice config controller"
	I0407 14:00:58.375055       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 14:00:58.475248       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:00:58.475246       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 14:00:58.475267       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3237ab3c805db26cb3d5a85e628040ded3a1d2fe1dfe7e1c559296b1adbcdb09] <==
	E0407 14:01:45.924084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:45.924157       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:45.924194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	E0407 14:01:45.924347       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.61.76:8443: connect: connection refused"
	W0407 14:01:47.039782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.039998       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.422042       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.422101       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:47.476136       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:47.476267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.056438       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:49.056537       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:49.760575       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:49.760675       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:50.395075       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:50.395177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:53.062329       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:53.062431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-440331&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:54.884033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:54.884130       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	W0407 14:01:56.591502       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.76:8443: connect: connection refused
	E0407 14:01:56.591570       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.76:8443: connect: connection refused" logger="UnhandledError"
	I0407 14:02:02.423805       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:02:05.922854       1 shared_informer.go:320] Caches are synced for service config
	I0407 14:02:06.623168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a50b1c7960acd6cb9d8031d9be0f54f319c6c7edbc944d32a9d0b2f143eae4a0] <==
	I0407 14:01:44.571127       1 serving.go:386] Generated self-signed cert in-memory
	W0407 14:01:45.628071       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 14:01:45.628110       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 14:01:45.628120       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 14:01:45.628198       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 14:01:45.714642       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 14:01:45.714683       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:01:45.722355       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 14:01:45.722564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:01:45.722612       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:01:45.723167       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 14:01:45.823262       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0407 14:01:53.913800       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c230be78d02592bf2150f8706a80b3aac7c8456b2e05620b54dc512d85a0bff5] <==
	I0407 14:01:57.430689       1 serving.go:386] Generated self-signed cert in-memory
	W0407 14:02:00.028514       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 14:02:00.030065       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 14:02:00.030157       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 14:02:00.030185       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 14:02:00.126041       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 14:02:00.126070       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:02:00.130887       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 14:02:00.131098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:02:00.131120       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:02:00.131159       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 14:02:00.232205       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 14:01:59 pause-440331 kubelet[3382]: E0407 14:01:59.099044    3382 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-440331\" not found" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.099890    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.100385    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.159158    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.191116    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-440331\" already exists" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.197313    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-440331\" already exists" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.199033    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-440331\" already exists" pod="kube-system/kube-apiserver-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.199082    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238654    3382 kubelet_node_status.go:125] "Node was previously registered" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238748    3382 kubelet_node_status.go:79] "Successfully registered node" node="pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.238771    3382 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.240269    3382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.243139    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-440331\" already exists" pod="kube-system/kube-controller-manager-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.243267    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.258216    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-440331\" already exists" pod="kube-system/kube-scheduler-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.258374    3382 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: E0407 14:02:00.278475    3382 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-440331\" already exists" pod="kube-system/etcd-pause-440331"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.929465    3382 apiserver.go:52] "Watching apiserver"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.954615    3382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.998872    3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e593e76c-63b2-4de2-9d53-98aae3fa045f-xtables-lock\") pod \"kube-proxy-42rwd\" (UID: \"e593e76c-63b2-4de2-9d53-98aae3fa045f\") " pod="kube-system/kube-proxy-42rwd"
	Apr 07 14:02:00 pause-440331 kubelet[3382]: I0407 14:02:00.999042    3382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e593e76c-63b2-4de2-9d53-98aae3fa045f-lib-modules\") pod \"kube-proxy-42rwd\" (UID: \"e593e76c-63b2-4de2-9d53-98aae3fa045f\") " pod="kube-system/kube-proxy-42rwd"
	Apr 07 14:02:06 pause-440331 kubelet[3382]: E0407 14:02:06.091406    3382 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034526091075576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:06 pause-440331 kubelet[3382]: E0407 14:02:06.091551    3382 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034526091075576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:16 pause-440331 kubelet[3382]: E0407 14:02:16.095319    3382 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034536093827617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 07 14:02:16 pause-440331 kubelet[3382]: E0407 14:02:16.095762    3382 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744034536093827617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-440331 -n pause-440331
helpers_test.go:261: (dbg) Run:  kubectl --context pause-440331 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (335.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m35.074815036s)

                                                
                                                
-- stdout --
	* [old-k8s-version-405646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-405646" primary control-plane node in "old-k8s-version-405646" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:04:18.466941  297880 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:04:18.467103  297880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:04:18.467116  297880 out.go:358] Setting ErrFile to fd 2...
	I0407 14:04:18.467125  297880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:04:18.467311  297880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:04:18.467994  297880 out.go:352] Setting JSON to false
	I0407 14:04:18.469178  297880 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20805,"bootTime":1744013853,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:04:18.469244  297880 start.go:139] virtualization: kvm guest
	I0407 14:04:18.471159  297880 out.go:177] * [old-k8s-version-405646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:04:18.472563  297880 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:04:18.472592  297880 notify.go:220] Checking for updates...
	I0407 14:04:18.475009  297880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:04:18.476262  297880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:04:18.477369  297880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:04:18.478665  297880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:04:18.479903  297880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:04:18.481483  297880 config.go:182] Loaded profile config "calico-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:04:18.481595  297880 config.go:182] Loaded profile config "custom-flannel-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:04:18.481673  297880 config.go:182] Loaded profile config "kindnet-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:04:18.481753  297880 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:04:18.521980  297880 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 14:04:18.523331  297880 start.go:297] selected driver: kvm2
	I0407 14:04:18.523350  297880 start.go:901] validating driver "kvm2" against <nil>
	I0407 14:04:18.523361  297880 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:04:18.524025  297880 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:04:18.524097  297880 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:04:18.539984  297880 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:04:18.540042  297880 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 14:04:18.540348  297880 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:04:18.540393  297880 cni.go:84] Creating CNI manager for ""
	I0407 14:04:18.540465  297880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:04:18.540478  297880 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 14:04:18.540541  297880 start.go:340] cluster config:
	{Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:04:18.540729  297880 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:04:18.542512  297880 out.go:177] * Starting "old-k8s-version-405646" primary control-plane node in "old-k8s-version-405646" cluster
	I0407 14:04:18.544178  297880 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 14:04:18.544229  297880 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 14:04:18.544240  297880 cache.go:56] Caching tarball of preloaded images
	I0407 14:04:18.544377  297880 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:04:18.544399  297880 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 14:04:18.544545  297880 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/config.json ...
	I0407 14:04:18.544576  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/config.json: {Name:mk73781e24e79431662888866304d507d7d82690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:04:18.544734  297880 start.go:360] acquireMachinesLock for old-k8s-version-405646: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:05:13.369513  297880 start.go:364] duration metric: took 54.824748831s to acquireMachinesLock for "old-k8s-version-405646"
	I0407 14:05:13.369593  297880 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:05:13.369677  297880 start.go:125] createHost starting for "" (driver="kvm2")
	I0407 14:05:13.371233  297880 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 14:05:13.371417  297880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:05:13.371480  297880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:05:13.389864  297880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0407 14:05:13.390396  297880 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:05:13.391037  297880 main.go:141] libmachine: Using API Version  1
	I0407 14:05:13.391058  297880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:05:13.391424  297880 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:05:13.391643  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:05:13.391837  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:13.392009  297880 start.go:159] libmachine.API.Create for "old-k8s-version-405646" (driver="kvm2")
	I0407 14:05:13.392046  297880 client.go:168] LocalClient.Create starting
	I0407 14:05:13.392101  297880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem
	I0407 14:05:13.392150  297880 main.go:141] libmachine: Decoding PEM data...
	I0407 14:05:13.392178  297880 main.go:141] libmachine: Parsing certificate...
	I0407 14:05:13.392285  297880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem
	I0407 14:05:13.392331  297880 main.go:141] libmachine: Decoding PEM data...
	I0407 14:05:13.392350  297880 main.go:141] libmachine: Parsing certificate...
	I0407 14:05:13.392375  297880 main.go:141] libmachine: Running pre-create checks...
	I0407 14:05:13.392398  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .PreCreateCheck
	I0407 14:05:13.392819  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetConfigRaw
	I0407 14:05:13.393307  297880 main.go:141] libmachine: Creating machine...
	I0407 14:05:13.393327  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .Create
	I0407 14:05:13.393499  297880 main.go:141] libmachine: (old-k8s-version-405646) creating KVM machine...
	I0407 14:05:13.393519  297880 main.go:141] libmachine: (old-k8s-version-405646) creating network...
	I0407 14:05:13.394874  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found existing default KVM network
	I0407 14:05:13.395957  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.395789  298507 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:e2:6d} reservation:<nil>}
	I0407 14:05:13.396643  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.396549  298507 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5f:c5:6c} reservation:<nil>}
	I0407 14:05:13.397422  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.397325  298507 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:11:95} reservation:<nil>}
	I0407 14:05:13.398506  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.398420  298507 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000352a50}
	I0407 14:05:13.398541  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | created network xml: 
	I0407 14:05:13.398553  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | <network>
	I0407 14:05:13.398562  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   <name>mk-old-k8s-version-405646</name>
	I0407 14:05:13.398571  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   <dns enable='no'/>
	I0407 14:05:13.398582  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   
	I0407 14:05:13.398597  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0407 14:05:13.398608  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |     <dhcp>
	I0407 14:05:13.398618  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0407 14:05:13.398624  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |     </dhcp>
	I0407 14:05:13.398633  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   </ip>
	I0407 14:05:13.398639  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG |   
	I0407 14:05:13.398651  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | </network>
	I0407 14:05:13.398657  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | 
	I0407 14:05:13.404253  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | trying to create private KVM network mk-old-k8s-version-405646 192.168.72.0/24...
	I0407 14:05:13.482723  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | private KVM network mk-old-k8s-version-405646 192.168.72.0/24 created
	I0407 14:05:13.482760  297880 main.go:141] libmachine: (old-k8s-version-405646) setting up store path in /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646 ...
	I0407 14:05:13.482773  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.482688  298507 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:05:13.482807  297880 main.go:141] libmachine: (old-k8s-version-405646) building disk image from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 14:05:13.482925  297880 main.go:141] libmachine: (old-k8s-version-405646) Downloading /home/jenkins/minikube-integration/20598-242355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 14:05:13.766568  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.766426  298507 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa...
	I0407 14:05:13.897009  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.896876  298507 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/old-k8s-version-405646.rawdisk...
	I0407 14:05:13.897046  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Writing magic tar header
	I0407 14:05:13.897069  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Writing SSH key tar header
	I0407 14:05:13.897097  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:13.897027  298507 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646 ...
	I0407 14:05:13.897212  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646
	I0407 14:05:13.897284  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube/machines
	I0407 14:05:13.897306  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646 (perms=drwx------)
	I0407 14:05:13.897320  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube/machines (perms=drwxr-xr-x)
	I0407 14:05:13.897330  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins/minikube-integration/20598-242355/.minikube (perms=drwxr-xr-x)
	I0407 14:05:13.897353  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins/minikube-integration/20598-242355 (perms=drwxrwxr-x)
	I0407 14:05:13.897368  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:05:13.897378  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0407 14:05:13.897389  297880 main.go:141] libmachine: (old-k8s-version-405646) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0407 14:05:13.897400  297880 main.go:141] libmachine: (old-k8s-version-405646) creating domain...
	I0407 14:05:13.897414  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20598-242355
	I0407 14:05:13.897424  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0407 14:05:13.897433  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home/jenkins
	I0407 14:05:13.897449  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | checking permissions on dir: /home
	I0407 14:05:13.897460  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | skipping /home - not owner
	I0407 14:05:13.898537  297880 main.go:141] libmachine: (old-k8s-version-405646) define libvirt domain using xml: 
	I0407 14:05:13.898557  297880 main.go:141] libmachine: (old-k8s-version-405646) <domain type='kvm'>
	I0407 14:05:13.898567  297880 main.go:141] libmachine: (old-k8s-version-405646)   <name>old-k8s-version-405646</name>
	I0407 14:05:13.898580  297880 main.go:141] libmachine: (old-k8s-version-405646)   <memory unit='MiB'>2200</memory>
	I0407 14:05:13.898592  297880 main.go:141] libmachine: (old-k8s-version-405646)   <vcpu>2</vcpu>
	I0407 14:05:13.898611  297880 main.go:141] libmachine: (old-k8s-version-405646)   <features>
	I0407 14:05:13.898627  297880 main.go:141] libmachine: (old-k8s-version-405646)     <acpi/>
	I0407 14:05:13.898634  297880 main.go:141] libmachine: (old-k8s-version-405646)     <apic/>
	I0407 14:05:13.898641  297880 main.go:141] libmachine: (old-k8s-version-405646)     <pae/>
	I0407 14:05:13.898653  297880 main.go:141] libmachine: (old-k8s-version-405646)     
	I0407 14:05:13.898660  297880 main.go:141] libmachine: (old-k8s-version-405646)   </features>
	I0407 14:05:13.898676  297880 main.go:141] libmachine: (old-k8s-version-405646)   <cpu mode='host-passthrough'>
	I0407 14:05:13.898687  297880 main.go:141] libmachine: (old-k8s-version-405646)   
	I0407 14:05:13.898693  297880 main.go:141] libmachine: (old-k8s-version-405646)   </cpu>
	I0407 14:05:13.898704  297880 main.go:141] libmachine: (old-k8s-version-405646)   <os>
	I0407 14:05:13.898714  297880 main.go:141] libmachine: (old-k8s-version-405646)     <type>hvm</type>
	I0407 14:05:13.898723  297880 main.go:141] libmachine: (old-k8s-version-405646)     <boot dev='cdrom'/>
	I0407 14:05:13.898733  297880 main.go:141] libmachine: (old-k8s-version-405646)     <boot dev='hd'/>
	I0407 14:05:13.898752  297880 main.go:141] libmachine: (old-k8s-version-405646)     <bootmenu enable='no'/>
	I0407 14:05:13.898772  297880 main.go:141] libmachine: (old-k8s-version-405646)   </os>
	I0407 14:05:13.898780  297880 main.go:141] libmachine: (old-k8s-version-405646)   <devices>
	I0407 14:05:13.898786  297880 main.go:141] libmachine: (old-k8s-version-405646)     <disk type='file' device='cdrom'>
	I0407 14:05:13.898824  297880 main.go:141] libmachine: (old-k8s-version-405646)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/boot2docker.iso'/>
	I0407 14:05:13.898852  297880 main.go:141] libmachine: (old-k8s-version-405646)       <target dev='hdc' bus='scsi'/>
	I0407 14:05:13.898865  297880 main.go:141] libmachine: (old-k8s-version-405646)       <readonly/>
	I0407 14:05:13.898886  297880 main.go:141] libmachine: (old-k8s-version-405646)     </disk>
	I0407 14:05:13.898894  297880 main.go:141] libmachine: (old-k8s-version-405646)     <disk type='file' device='disk'>
	I0407 14:05:13.898900  297880 main.go:141] libmachine: (old-k8s-version-405646)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0407 14:05:13.898908  297880 main.go:141] libmachine: (old-k8s-version-405646)       <source file='/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/old-k8s-version-405646.rawdisk'/>
	I0407 14:05:13.898916  297880 main.go:141] libmachine: (old-k8s-version-405646)       <target dev='hda' bus='virtio'/>
	I0407 14:05:13.898921  297880 main.go:141] libmachine: (old-k8s-version-405646)     </disk>
	I0407 14:05:13.898928  297880 main.go:141] libmachine: (old-k8s-version-405646)     <interface type='network'>
	I0407 14:05:13.898934  297880 main.go:141] libmachine: (old-k8s-version-405646)       <source network='mk-old-k8s-version-405646'/>
	I0407 14:05:13.898941  297880 main.go:141] libmachine: (old-k8s-version-405646)       <model type='virtio'/>
	I0407 14:05:13.898948  297880 main.go:141] libmachine: (old-k8s-version-405646)     </interface>
	I0407 14:05:13.898959  297880 main.go:141] libmachine: (old-k8s-version-405646)     <interface type='network'>
	I0407 14:05:13.898967  297880 main.go:141] libmachine: (old-k8s-version-405646)       <source network='default'/>
	I0407 14:05:13.898976  297880 main.go:141] libmachine: (old-k8s-version-405646)       <model type='virtio'/>
	I0407 14:05:13.898983  297880 main.go:141] libmachine: (old-k8s-version-405646)     </interface>
	I0407 14:05:13.898990  297880 main.go:141] libmachine: (old-k8s-version-405646)     <serial type='pty'>
	I0407 14:05:13.898998  297880 main.go:141] libmachine: (old-k8s-version-405646)       <target port='0'/>
	I0407 14:05:13.899005  297880 main.go:141] libmachine: (old-k8s-version-405646)     </serial>
	I0407 14:05:13.899010  297880 main.go:141] libmachine: (old-k8s-version-405646)     <console type='pty'>
	I0407 14:05:13.899017  297880 main.go:141] libmachine: (old-k8s-version-405646)       <target type='serial' port='0'/>
	I0407 14:05:13.899022  297880 main.go:141] libmachine: (old-k8s-version-405646)     </console>
	I0407 14:05:13.899029  297880 main.go:141] libmachine: (old-k8s-version-405646)     <rng model='virtio'>
	I0407 14:05:13.899040  297880 main.go:141] libmachine: (old-k8s-version-405646)       <backend model='random'>/dev/random</backend>
	I0407 14:05:13.899046  297880 main.go:141] libmachine: (old-k8s-version-405646)     </rng>
	I0407 14:05:13.899051  297880 main.go:141] libmachine: (old-k8s-version-405646)     
	I0407 14:05:13.899057  297880 main.go:141] libmachine: (old-k8s-version-405646)     
	I0407 14:05:13.899062  297880 main.go:141] libmachine: (old-k8s-version-405646)   </devices>
	I0407 14:05:13.899065  297880 main.go:141] libmachine: (old-k8s-version-405646) </domain>
	I0407 14:05:13.899075  297880 main.go:141] libmachine: (old-k8s-version-405646) 
	I0407 14:05:13.924243  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:55:01:32 in network default
	I0407 14:05:13.924959  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:13.924980  297880 main.go:141] libmachine: (old-k8s-version-405646) starting domain...
	I0407 14:05:13.924986  297880 main.go:141] libmachine: (old-k8s-version-405646) ensuring networks are active...
	I0407 14:05:13.926147  297880 main.go:141] libmachine: (old-k8s-version-405646) Ensuring network default is active
	I0407 14:05:13.926586  297880 main.go:141] libmachine: (old-k8s-version-405646) Ensuring network mk-old-k8s-version-405646 is active
	I0407 14:05:13.927321  297880 main.go:141] libmachine: (old-k8s-version-405646) getting domain XML...
	I0407 14:05:13.928219  297880 main.go:141] libmachine: (old-k8s-version-405646) creating domain...
	I0407 14:05:15.481791  297880 main.go:141] libmachine: (old-k8s-version-405646) waiting for IP...
	I0407 14:05:15.482964  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:15.483652  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:15.483678  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:15.483614  298507 retry.go:31] will retry after 254.808839ms: waiting for domain to come up
	I0407 14:05:15.740406  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:15.741306  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:15.741365  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:15.741276  298507 retry.go:31] will retry after 371.195285ms: waiting for domain to come up
	I0407 14:05:16.114800  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:16.115631  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:16.115661  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:16.115610  298507 retry.go:31] will retry after 464.118515ms: waiting for domain to come up
	I0407 14:05:16.581460  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:16.582128  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:16.582184  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:16.582095  298507 retry.go:31] will retry after 528.050928ms: waiting for domain to come up
	I0407 14:05:17.112061  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:17.112738  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:17.112769  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:17.112702  298507 retry.go:31] will retry after 479.768573ms: waiting for domain to come up
	I0407 14:05:17.594322  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:17.594820  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:17.594879  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:17.594799  298507 retry.go:31] will retry after 655.219269ms: waiting for domain to come up
	I0407 14:05:18.251282  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:18.251881  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:18.251934  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:18.251836  298507 retry.go:31] will retry after 914.908353ms: waiting for domain to come up
	I0407 14:05:19.168958  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:19.169523  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:19.169557  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:19.169488  298507 retry.go:31] will retry after 931.980959ms: waiting for domain to come up
	I0407 14:05:20.102809  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:20.103341  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:20.103367  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:20.103314  298507 retry.go:31] will retry after 1.44677769s: waiting for domain to come up
	I0407 14:05:21.552189  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:21.552811  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:21.552837  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:21.552784  298507 retry.go:31] will retry after 1.689221726s: waiting for domain to come up
	I0407 14:05:23.243512  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:23.244165  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:23.244205  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:23.244103  298507 retry.go:31] will retry after 2.473404072s: waiting for domain to come up
	I0407 14:05:25.718843  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:25.719568  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:25.719597  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:25.719460  298507 retry.go:31] will retry after 3.124817186s: waiting for domain to come up
	I0407 14:05:28.846806  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:28.847444  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:28.847481  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:28.847419  298507 retry.go:31] will retry after 4.256893324s: waiting for domain to come up
	I0407 14:05:33.105684  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:33.106244  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:05:33.106271  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:05:33.106186  298507 retry.go:31] will retry after 5.157393176s: waiting for domain to come up
	I0407 14:05:38.265872  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:38.266457  297880 main.go:141] libmachine: (old-k8s-version-405646) found domain IP: 192.168.72.163
	I0407 14:05:38.266482  297880 main.go:141] libmachine: (old-k8s-version-405646) reserving static IP address...
	I0407 14:05:38.266497  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has current primary IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:38.266865  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-405646", mac: "52:54:00:54:a4:af", ip: "192.168.72.163"} in network mk-old-k8s-version-405646
	I0407 14:05:38.349701  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Getting to WaitForSSH function...
	I0407 14:05:38.349729  297880 main.go:141] libmachine: (old-k8s-version-405646) reserved static IP address 192.168.72.163 for domain old-k8s-version-405646
	I0407 14:05:38.349766  297880 main.go:141] libmachine: (old-k8s-version-405646) waiting for SSH...
	I0407 14:05:38.352870  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:38.353223  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646
	I0407 14:05:38.353247  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find defined IP address of network mk-old-k8s-version-405646 interface with MAC address 52:54:00:54:a4:af
	I0407 14:05:38.353423  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH client type: external
	I0407 14:05:38.353445  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa (-rw-------)
	I0407 14:05:38.353491  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:05:38.353506  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | About to run SSH command:
	I0407 14:05:38.353518  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | exit 0
	I0407 14:05:38.357647  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | SSH cmd err, output: exit status 255: 
	I0407 14:05:38.357671  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0407 14:05:38.357683  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | command : exit 0
	I0407 14:05:38.357691  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | err     : exit status 255
	I0407 14:05:38.357705  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | output  : 
	I0407 14:05:41.360211  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Getting to WaitForSSH function...
	I0407 14:05:41.363138  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.363614  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:41.363638  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.363788  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH client type: external
	I0407 14:05:41.363813  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa (-rw-------)
	I0407 14:05:41.363829  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:05:41.363834  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | About to run SSH command:
	I0407 14:05:41.363843  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | exit 0
	I0407 14:05:41.496938  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | SSH cmd err, output: <nil>: 
	I0407 14:05:41.497233  297880 main.go:141] libmachine: (old-k8s-version-405646) KVM machine creation complete
	I0407 14:05:41.497614  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetConfigRaw
	I0407 14:05:41.498218  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:41.498469  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:41.498645  297880 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0407 14:05:41.498667  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetState
	I0407 14:05:41.500243  297880 main.go:141] libmachine: Detecting operating system of created instance...
	I0407 14:05:41.500261  297880 main.go:141] libmachine: Waiting for SSH to be available...
	I0407 14:05:41.500269  297880 main.go:141] libmachine: Getting to WaitForSSH function...
	I0407 14:05:41.500279  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:41.503171  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.503672  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:41.503714  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.503996  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:41.504223  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.504674  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.504883  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:41.505098  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:41.505390  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:41.505405  297880 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0407 14:05:41.616851  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:05:41.616876  297880 main.go:141] libmachine: Detecting the provisioner...
	I0407 14:05:41.616886  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:41.620317  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.620659  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:41.620685  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.620869  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:41.621070  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.621205  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.621331  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:41.621494  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:41.621781  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:41.621793  297880 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0407 14:05:41.745755  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0407 14:05:41.745871  297880 main.go:141] libmachine: found compatible host: buildroot
	I0407 14:05:41.745887  297880 main.go:141] libmachine: Provisioning with buildroot...
	I0407 14:05:41.745900  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:05:41.746188  297880 buildroot.go:166] provisioning hostname "old-k8s-version-405646"
	I0407 14:05:41.746222  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:05:41.746449  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:41.749663  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.750093  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:41.750139  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.750417  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:41.750587  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.750866  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.751065  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:41.751283  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:41.751568  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:41.751593  297880 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405646 && echo "old-k8s-version-405646" | sudo tee /etc/hostname
	I0407 14:05:41.880298  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405646
	
	I0407 14:05:41.880333  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:41.884279  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.884765  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:41.884795  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:41.885031  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:41.885243  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.885440  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:41.885584  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:41.885766  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:41.886087  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:41.886123  297880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405646/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:05:42.023111  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:05:42.023137  297880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:05:42.023154  297880 buildroot.go:174] setting up certificates
	I0407 14:05:42.023178  297880 provision.go:84] configureAuth start
	I0407 14:05:42.023187  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:05:42.023425  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:05:42.026292  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:42.026610  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:42.026644  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:42.026850  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:42.029085  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:42.029543  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:42.029562  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:42.029781  297880 provision.go:143] copyHostCerts
	I0407 14:05:42.029856  297880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:05:42.029880  297880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:05:42.029932  297880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:05:42.030015  297880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:05:42.030022  297880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:05:42.030052  297880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:05:42.030104  297880 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:05:42.030109  297880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:05:42.030126  297880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:05:42.030169  297880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405646 san=[127.0.0.1 192.168.72.163 localhost minikube old-k8s-version-405646]
	I0407 14:05:43.074659  297880 provision.go:177] copyRemoteCerts
	I0407 14:05:43.074748  297880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:05:43.074785  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.077979  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.078467  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.078506  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.078710  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.078964  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.079181  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.079353  297880 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:05:43.165227  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:05:43.194888  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:05:43.231876  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 14:05:43.266858  297880 provision.go:87] duration metric: took 1.243665863s to configureAuth
	I0407 14:05:43.266903  297880 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:05:43.267122  297880 config.go:182] Loaded profile config "old-k8s-version-405646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 14:05:43.267216  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.270339  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.270713  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.270749  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.271083  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.271295  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.271493  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.271631  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.271799  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:43.272113  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:43.272137  297880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:05:43.540502  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:05:43.540542  297880 main.go:141] libmachine: Checking connection to Docker...
	I0407 14:05:43.540552  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetURL
	I0407 14:05:43.541839  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | using libvirt version 6000000
	I0407 14:05:43.544793  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.545157  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.545184  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.545403  297880 main.go:141] libmachine: Docker is up and running!
	I0407 14:05:43.545422  297880 main.go:141] libmachine: Reticulating splines...
	I0407 14:05:43.545431  297880 client.go:171] duration metric: took 30.153370238s to LocalClient.Create
	I0407 14:05:43.545449  297880 start.go:167] duration metric: took 30.15344088s to libmachine.API.Create "old-k8s-version-405646"
	I0407 14:05:43.545460  297880 start.go:293] postStartSetup for "old-k8s-version-405646" (driver="kvm2")
	I0407 14:05:43.545469  297880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:05:43.545488  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:43.545715  297880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:05:43.545740  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.548339  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.548701  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.548742  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.548939  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.549118  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.549299  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.549499  297880 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:05:43.641864  297880 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:05:43.647003  297880 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:05:43.647031  297880 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:05:43.647121  297880 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:05:43.647218  297880 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:05:43.647329  297880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:05:43.659072  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:05:43.686665  297880 start.go:296] duration metric: took 141.187412ms for postStartSetup
	I0407 14:05:43.686737  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetConfigRaw
	I0407 14:05:43.687555  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:05:43.690274  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.690573  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.690592  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.690960  297880 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/config.json ...
	I0407 14:05:43.691195  297880 start.go:128] duration metric: took 30.321503403s to createHost
	I0407 14:05:43.691248  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.693364  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.693676  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.693703  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.693826  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.694028  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.694210  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.694372  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.694565  297880 main.go:141] libmachine: Using SSH client type: native
	I0407 14:05:43.694755  297880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:05:43.694765  297880 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:05:43.801762  297880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034743.779881462
	
	I0407 14:05:43.801793  297880 fix.go:216] guest clock: 1744034743.779881462
	I0407 14:05:43.801802  297880 fix.go:229] Guest: 2025-04-07 14:05:43.779881462 +0000 UTC Remote: 2025-04-07 14:05:43.691215699 +0000 UTC m=+85.261647241 (delta=88.665763ms)
	I0407 14:05:43.801859  297880 fix.go:200] guest clock delta is within tolerance: 88.665763ms
	I0407 14:05:43.801871  297880 start.go:83] releasing machines lock for "old-k8s-version-405646", held for 30.432309696s
	I0407 14:05:43.801904  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:43.802244  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:05:43.805468  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.805822  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.805871  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.806113  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:43.806620  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:43.806794  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:05:43.806874  297880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:05:43.806918  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.807025  297880 ssh_runner.go:195] Run: cat /version.json
	I0407 14:05:43.807053  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:05:43.810129  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.810307  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.810503  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.810524  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.810719  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.810787  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:43.810815  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:43.810931  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.811029  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:05:43.811106  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.811149  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:05:43.811213  297880 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:05:43.811638  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:05:43.811796  297880 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:05:43.891229  297880 ssh_runner.go:195] Run: systemctl --version
	I0407 14:05:43.912168  297880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:05:44.077226  297880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:05:44.085833  297880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:05:44.085894  297880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:05:44.106221  297880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:05:44.106256  297880 start.go:495] detecting cgroup driver to use...
	I0407 14:05:44.106352  297880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:05:44.126888  297880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:05:44.147414  297880 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:05:44.147493  297880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:05:44.164645  297880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:05:44.180472  297880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:05:44.336287  297880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:05:44.514962  297880 docker.go:233] disabling docker service ...
	I0407 14:05:44.515026  297880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:05:44.531145  297880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:05:44.546568  297880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:05:44.697809  297880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:05:44.854077  297880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:05:44.869925  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:05:44.891031  297880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 14:05:44.891099  297880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:05:44.903604  297880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:05:44.903672  297880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:05:44.914837  297880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:05:44.925215  297880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:05:44.938894  297880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:05:44.951274  297880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:05:44.961382  297880 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:05:44.961466  297880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:05:44.979064  297880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:05:44.989350  297880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:05:45.118707  297880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 14:05:45.224862  297880 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:05:45.224957  297880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:05:45.231337  297880 start.go:563] Will wait 60s for crictl version
	I0407 14:05:45.231417  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:45.236625  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:05:45.286793  297880 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:05:45.286885  297880 ssh_runner.go:195] Run: crio --version
	I0407 14:05:45.326049  297880 ssh_runner.go:195] Run: crio --version
	I0407 14:05:45.363890  297880 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 14:05:45.365047  297880 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:05:45.368018  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:45.368357  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:05:45.368385  297880 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:05:45.368670  297880 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0407 14:05:45.373631  297880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:05:45.388171  297880 kubeadm.go:883] updating cluster {Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:05:45.388372  297880 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 14:05:45.388469  297880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:05:45.431835  297880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 14:05:45.431920  297880 ssh_runner.go:195] Run: which lz4
	I0407 14:05:45.436991  297880 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 14:05:45.441671  297880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 14:05:45.441704  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 14:05:47.125061  297880 crio.go:462] duration metric: took 1.688112643s to copy over tarball
	I0407 14:05:47.125142  297880 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 14:05:50.280848  297880 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.155670323s)
	I0407 14:05:50.280885  297880 crio.go:469] duration metric: took 3.155787946s to extract the tarball
	I0407 14:05:50.280896  297880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 14:05:50.326260  297880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:05:50.389584  297880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 14:05:50.389619  297880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 14:05:50.389678  297880 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:05:50.389700  297880 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.389734  297880 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.389713  297880 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:50.389939  297880 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:50.389997  297880 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 14:05:50.390051  297880 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.390150  297880 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.391738  297880 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.391819  297880 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.391902  297880 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 14:05:50.391820  297880 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:05:50.391991  297880 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.392059  297880 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:50.392161  297880 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:50.392171  297880 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.527801  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.528148  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.529140  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.551644  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.556268  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:50.567370  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 14:05:50.599316  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:50.654114  297880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 14:05:50.654175  297880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.654316  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.709423  297880 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 14:05:50.709480  297880 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.709524  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.709729  297880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 14:05:50.709775  297880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.709833  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.752342  297880 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 14:05:50.752383  297880 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 14:05:50.752395  297880 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.752403  297880 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 14:05:50.752463  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.752463  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.752352  297880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 14:05:50.752564  297880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:50.752601  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.782038  297880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 14:05:50.782092  297880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:50.782140  297880 ssh_runner.go:195] Run: which crictl
	I0407 14:05:50.782249  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.782311  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.782367  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.782449  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:50.782496  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:05:50.782535  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.949698  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:05:50.949704  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:50.949773  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:05:50.963568  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:50.963731  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:50.963789  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:50.963873  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:51.164721  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:05:51.164759  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:05:51.164824  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:51.181475  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:05:51.181615  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:05:51.181683  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:05:51.181788  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:05:51.379240  297880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:05:51.379262  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 14:05:51.379290  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 14:05:51.397168  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 14:05:51.397259  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 14:05:51.397309  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 14:05:51.397351  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 14:05:51.430389  297880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 14:05:51.910141  297880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:05:52.067474  297880 cache_images.go:92] duration metric: took 1.677833468s to LoadCachedImages
	W0407 14:05:52.067569  297880 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0407 14:05:52.067588  297880 kubeadm.go:934] updating node { 192.168.72.163 8443 v1.20.0 crio true true} ...
	I0407 14:05:52.067696  297880 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405646 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:05:52.067791  297880 ssh_runner.go:195] Run: crio config
	I0407 14:05:52.135640  297880 cni.go:84] Creating CNI manager for ""
	I0407 14:05:52.135673  297880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:05:52.135688  297880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:05:52.135715  297880 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405646 NodeName:old-k8s-version-405646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 14:05:52.135926  297880 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405646"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:05:52.136014  297880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 14:05:52.151329  297880 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:05:52.151413  297880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:05:52.164304  297880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0407 14:05:52.185724  297880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:05:52.210529  297880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0407 14:05:52.233908  297880 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0407 14:05:52.238209  297880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:05:52.255609  297880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:05:52.386184  297880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:05:52.408618  297880 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646 for IP: 192.168.72.163
	I0407 14:05:52.408644  297880 certs.go:194] generating shared ca certs ...
	I0407 14:05:52.408666  297880 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:52.408843  297880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:05:52.408920  297880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:05:52.408936  297880 certs.go:256] generating profile certs ...
	I0407 14:05:52.409010  297880 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.key
	I0407 14:05:52.409028  297880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.crt with IP's: []
	I0407 14:05:52.571921  297880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.crt ...
	I0407 14:05:52.571954  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.crt: {Name:mkce07ebd1f4f437bf1027366e35a9f9f7bbbc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:52.572212  297880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.key ...
	I0407 14:05:52.572239  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.key: {Name:mka9fd3bb039b40af1e6524fb2f22f689d130472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:52.572405  297880 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key.f7e6b837
	I0407 14:05:52.572444  297880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt.f7e6b837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.163]
	I0407 14:05:53.065577  297880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt.f7e6b837 ...
	I0407 14:05:53.065621  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt.f7e6b837: {Name:mk456889dfabcb01a7c53e53bb59df6bac0fc6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:53.065846  297880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key.f7e6b837 ...
	I0407 14:05:53.065922  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key.f7e6b837: {Name:mk301ebac00082794d26c7cdc7764b0e13f36243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:53.066077  297880 certs.go:381] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt.f7e6b837 -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt
	I0407 14:05:53.066191  297880 certs.go:385] copying /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key.f7e6b837 -> /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key
	I0407 14:05:53.066276  297880 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key
	I0407 14:05:53.066300  297880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.crt with IP's: []
	I0407 14:05:53.256757  297880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.crt ...
	I0407 14:05:53.256791  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.crt: {Name:mk9ed7d6c4d0ab58eef3c788f19c757666450da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:53.256938  297880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key ...
	I0407 14:05:53.256952  297880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key: {Name:mkc7a6165cc94dc8ba294ef84f43d1df12fd5b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:05:53.257135  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:05:53.257170  297880 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:05:53.257180  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:05:53.257200  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:05:53.257226  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:05:53.257246  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:05:53.257295  297880 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:05:53.257924  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:05:53.286633  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:05:53.310683  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:05:53.334445  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:05:53.363314  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 14:05:53.390361  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:05:53.417519  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:05:53.447692  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 14:05:53.481411  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:05:53.515235  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:05:53.553038  297880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:05:53.584440  297880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:05:53.605613  297880 ssh_runner.go:195] Run: openssl version
	I0407 14:05:53.612357  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:05:53.625475  297880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:05:53.631470  297880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:05:53.631542  297880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:05:53.639313  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:05:53.653656  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:05:53.669372  297880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:05:53.674656  297880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:05:53.674714  297880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:05:53.680712  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:05:53.694247  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:05:53.707447  297880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:05:53.712620  297880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:05:53.712674  297880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:05:53.719322  297880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:05:53.732551  297880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:05:53.737237  297880 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 14:05:53.737297  297880 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:05:53.737425  297880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:05:53.737500  297880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:05:53.782330  297880 cri.go:89] found id: ""
	I0407 14:05:53.782412  297880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:05:53.793938  297880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:05:53.804826  297880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:05:53.817440  297880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:05:53.817461  297880 kubeadm.go:157] found existing configuration files:
	
	I0407 14:05:53.817511  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:05:53.828540  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:05:53.828601  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:05:53.843285  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:05:53.857761  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:05:53.857838  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:05:53.870695  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:05:53.883022  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:05:53.883088  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:05:53.896484  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:05:53.910503  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:05:53.910549  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:05:53.926215  297880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:05:54.109458  297880 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:05:54.109550  297880 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:05:54.298839  297880 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:05:54.298982  297880 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:05:54.299112  297880 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:05:54.599163  297880 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:05:54.602557  297880 out.go:235]   - Generating certificates and keys ...
	I0407 14:05:54.602671  297880 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:05:54.602765  297880 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:05:54.700657  297880 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 14:05:54.986632  297880 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 14:05:55.171704  297880 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 14:05:55.657597  297880 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 14:05:56.317312  297880 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 14:05:56.317520  297880 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-405646] and IPs [192.168.72.163 127.0.0.1 ::1]
	I0407 14:05:56.514169  297880 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 14:05:56.514356  297880 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-405646] and IPs [192.168.72.163 127.0.0.1 ::1]
	I0407 14:05:56.883292  297880 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 14:05:57.044829  297880 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 14:05:57.089562  297880 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 14:05:57.089654  297880 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:05:57.382340  297880 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:05:57.815864  297880 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:05:57.918935  297880 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:05:58.240839  297880 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:05:58.271292  297880 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:05:58.272317  297880 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:05:58.272497  297880 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:05:58.447914  297880 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:05:58.449992  297880 out.go:235]   - Booting up control plane ...
	I0407 14:05:58.450157  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:05:58.459766  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:05:58.461003  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:05:58.461952  297880 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:05:58.467179  297880 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:06:38.463065  297880 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:06:38.463770  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:06:38.464003  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:06:43.464399  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:06:43.464668  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:06:53.464816  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:06:53.465106  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:07:13.464913  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:07:13.465190  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:07:53.466833  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:07:53.467083  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:07:53.467107  297880 kubeadm.go:310] 
	I0407 14:07:53.467154  297880 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:07:53.467254  297880 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:07:53.467282  297880 kubeadm.go:310] 
	I0407 14:07:53.467336  297880 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:07:53.467383  297880 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:07:53.467529  297880 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:07:53.467550  297880 kubeadm.go:310] 
	I0407 14:07:53.467673  297880 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:07:53.467723  297880 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:07:53.467778  297880 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:07:53.467788  297880 kubeadm.go:310] 
	I0407 14:07:53.467981  297880 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:07:53.468138  297880 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:07:53.468152  297880 kubeadm.go:310] 
	I0407 14:07:53.468288  297880 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:07:53.468408  297880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:07:53.468522  297880 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:07:53.468617  297880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:07:53.468636  297880 kubeadm.go:310] 
	I0407 14:07:53.468814  297880 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:07:53.468928  297880 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:07:53.469025  297880 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 14:07:53.469203  297880 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-405646] and IPs [192.168.72.163 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-405646] and IPs [192.168.72.163 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 14:07:53.469259  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:07:56.227113  297880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.757821964s)
	I0407 14:07:56.227191  297880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:07:56.241626  297880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:07:56.251980  297880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:07:56.252000  297880 kubeadm.go:157] found existing configuration files:
	
	I0407 14:07:56.252042  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:07:56.261680  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:07:56.261744  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:07:56.271451  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:07:56.280875  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:07:56.280929  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:07:56.290714  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:07:56.300358  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:07:56.300412  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:07:56.310033  297880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:07:56.319203  297880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:07:56.319255  297880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:07:56.328648  297880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:07:56.533792  297880 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:09:52.821857  297880 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:09:52.822007  297880 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:09:52.824274  297880 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:09:52.824403  297880 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:09:52.824537  297880 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:09:52.824698  297880 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:09:52.824858  297880 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:09:52.824944  297880 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:09:52.826571  297880 out.go:235]   - Generating certificates and keys ...
	I0407 14:09:52.826685  297880 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:09:52.826767  297880 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:09:52.826866  297880 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:09:52.826948  297880 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:09:52.827037  297880 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:09:52.827111  297880 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:09:52.827194  297880 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:09:52.827272  297880 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:09:52.827369  297880 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:09:52.827456  297880 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:09:52.827506  297880 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:09:52.827567  297880 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:09:52.827625  297880 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:09:52.827686  297880 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:09:52.827760  297880 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:09:52.827824  297880 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:09:52.827942  297880 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:09:52.828053  297880 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:09:52.828107  297880 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:09:52.828191  297880 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:09:52.829514  297880 out.go:235]   - Booting up control plane ...
	I0407 14:09:52.829640  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:09:52.829755  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:09:52.829828  297880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:09:52.829919  297880 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:09:52.830102  297880 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:09:52.830185  297880 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:09:52.830318  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:09:52.830561  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:09:52.830672  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:09:52.830886  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:09:52.830996  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:09:52.831224  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:09:52.831327  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:09:52.831568  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:09:52.831687  297880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:09:52.831971  297880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:09:52.831992  297880 kubeadm.go:310] 
	I0407 14:09:52.832040  297880 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:09:52.832095  297880 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:09:52.832116  297880 kubeadm.go:310] 
	I0407 14:09:52.832166  297880 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:09:52.832225  297880 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:09:52.832365  297880 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:09:52.832379  297880 kubeadm.go:310] 
	I0407 14:09:52.832547  297880 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:09:52.832613  297880 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:09:52.832653  297880 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:09:52.832662  297880 kubeadm.go:310] 
	I0407 14:09:52.832799  297880 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:09:52.832913  297880 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:09:52.832931  297880 kubeadm.go:310] 
	I0407 14:09:52.833096  297880 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:09:52.833221  297880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:09:52.833342  297880 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:09:52.833452  297880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:09:52.833535  297880 kubeadm.go:310] 
	I0407 14:09:52.833539  297880 kubeadm.go:394] duration metric: took 3m59.096250562s to StartCluster
	I0407 14:09:52.833615  297880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:09:52.833686  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:09:52.889275  297880 cri.go:89] found id: ""
	I0407 14:09:52.889312  297880 logs.go:282] 0 containers: []
	W0407 14:09:52.889324  297880 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:09:52.889333  297880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:09:52.889403  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:09:52.932483  297880 cri.go:89] found id: ""
	I0407 14:09:52.932507  297880 logs.go:282] 0 containers: []
	W0407 14:09:52.932516  297880 logs.go:284] No container was found matching "etcd"
	I0407 14:09:52.932521  297880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:09:52.932586  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:09:52.969035  297880 cri.go:89] found id: ""
	I0407 14:09:52.969064  297880 logs.go:282] 0 containers: []
	W0407 14:09:52.969072  297880 logs.go:284] No container was found matching "coredns"
	I0407 14:09:52.969079  297880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:09:52.969149  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:09:53.007411  297880 cri.go:89] found id: ""
	I0407 14:09:53.007444  297880 logs.go:282] 0 containers: []
	W0407 14:09:53.007456  297880 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:09:53.007464  297880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:09:53.007530  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:09:53.041456  297880 cri.go:89] found id: ""
	I0407 14:09:53.041482  297880 logs.go:282] 0 containers: []
	W0407 14:09:53.041489  297880 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:09:53.041495  297880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:09:53.041561  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:09:53.083495  297880 cri.go:89] found id: ""
	I0407 14:09:53.083525  297880 logs.go:282] 0 containers: []
	W0407 14:09:53.083533  297880 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:09:53.083540  297880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:09:53.083592  297880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:09:53.119029  297880 cri.go:89] found id: ""
	I0407 14:09:53.119072  297880 logs.go:282] 0 containers: []
	W0407 14:09:53.119086  297880 logs.go:284] No container was found matching "kindnet"
	I0407 14:09:53.119100  297880 logs.go:123] Gathering logs for dmesg ...
	I0407 14:09:53.119119  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:09:53.133275  297880 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:09:53.133311  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:09:53.283102  297880 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:09:53.283136  297880 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:09:53.283155  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:09:53.389469  297880 logs.go:123] Gathering logs for container status ...
	I0407 14:09:53.389528  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:09:53.429915  297880 logs.go:123] Gathering logs for kubelet ...
	I0407 14:09:53.429951  297880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0407 14:09:53.485426  297880 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:09:53.485485  297880 out.go:270] * 
	* 
	W0407 14:09:53.485596  297880 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:09:53.485619  297880 out.go:270] * 
	* 
	W0407 14:09:53.486448  297880 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:09:53.489367  297880 out.go:201] 
	W0407 14:09:53.490550  297880 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:09:53.490592  297880 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:09:53.490623  297880 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:09:53.493217  297880 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 6 (239.149124ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 14:09:53.778109  305401 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405646" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (335.37s)
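The kubeadm output and the minikube hint above describe a manual troubleshooting path on the affected VM. A rough sketch of those steps, assuming they are run from the Jenkins workspace against this run's profile (old-k8s-version-405646); the final retry only applies the cgroup-driver workaround suggested in the log and is not a verified fix:

	# Check whether the kubelet is running inside the VM and why it may have exited
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# List control-plane containers through CRI-O, as suggested by kubeadm
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the kubelet cgroup-driver override hinted at in the log
	out/minikube-linux-amd64 start -p old-k8s-version-405646 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd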

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-405646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-405646 create -f testdata/busybox.yaml: exit status 1 (49.72995ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-405646" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-405646 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 6 (226.282326ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 14:09:54.055918  305440 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405646" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 6 (227.051904ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 14:09:54.282888  305470 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405646" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
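The status checks above keep warning that kubectl is pointing at a stale VM and that the "old-k8s-version-405646" context is missing from the kubeconfig. Applying the advice printed in that warning would look roughly like this (a sketch; it only repairs the kubeconfig entry and does nothing for the failed control plane itself):

	# Regenerate the kubeconfig entry for this profile, then confirm the context exists
	out/minikube-linux-amd64 -p old-k8s-version-405646 update-context
	kubectl config get-contexts old-k8s-version-405646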

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m57.754221952s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-405646 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-405646 describe deploy/metrics-server -n kube-system: exit status 1 (47.831438ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-405646" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-405646 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 6 (235.677595ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 14:11:52.321132  306226 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405646" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405646" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.04s)
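The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to carry the overridden image "fake.domain/registry.k8s.io/echoserver:1.4". Once the apiserver is reachable again, the same check could be reproduced by hand, roughly as follows (plain kubectl instead of the test harness):

	# Print the image used by the metrics-server deployment; the test expects it to contain
	# fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-405646 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'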

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (518.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0407 14:12:00.209158  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:12:22.941446  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:12:27.912496  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:12:47.434040  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:00.332689  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:09.354776  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:18.329376  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:22.631429  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:44.863534  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:46.032865  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:50.335065  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:13:51.955254  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m36.388947696s)

                                                
                                                
-- stdout --
	* [old-k8s-version-405646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-405646" primary control-plane node in "old-k8s-version-405646" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-405646" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:11:55.877584  306360 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:11:55.877741  306360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:11:55.877756  306360 out.go:358] Setting ErrFile to fd 2...
	I0407 14:11:55.877763  306360 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:11:55.878002  306360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:11:55.878632  306360 out.go:352] Setting JSON to false
	I0407 14:11:55.879779  306360 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":21263,"bootTime":1744013853,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:11:55.879843  306360 start.go:139] virtualization: kvm guest
	I0407 14:11:55.882005  306360 out.go:177] * [old-k8s-version-405646] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:11:55.883346  306360 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:11:55.883376  306360 notify.go:220] Checking for updates...
	I0407 14:11:55.885611  306360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:11:55.886633  306360 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:11:55.887683  306360 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:11:55.888762  306360 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:11:55.890239  306360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:11:55.891938  306360 config.go:182] Loaded profile config "old-k8s-version-405646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 14:11:55.892340  306360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:11:55.892457  306360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:11:55.908291  306360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0407 14:11:55.908883  306360 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:11:55.909601  306360 main.go:141] libmachine: Using API Version  1
	I0407 14:11:55.909637  306360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:11:55.910192  306360 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:11:55.910435  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:11:55.912395  306360 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0407 14:11:55.913542  306360 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:11:55.913885  306360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:11:55.913934  306360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:11:55.929915  306360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0407 14:11:55.930415  306360 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:11:55.930930  306360 main.go:141] libmachine: Using API Version  1
	I0407 14:11:55.930954  306360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:11:55.931302  306360 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:11:55.931482  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:11:55.972195  306360 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:11:55.973551  306360 start.go:297] selected driver: kvm2
	I0407 14:11:55.973569  306360 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 Clust
erName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:11:55.973709  306360 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:11:55.974466  306360 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:11:55.974561  306360 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:11:55.991091  306360 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:11:55.991545  306360 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:11:55.991580  306360 cni.go:84] Creating CNI manager for ""
	I0407 14:11:55.991634  306360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:11:55.991675  306360 start.go:340] cluster config:
	{Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:11:55.991798  306360 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:11:55.994104  306360 out.go:177] * Starting "old-k8s-version-405646" primary control-plane node in "old-k8s-version-405646" cluster
	I0407 14:11:55.995334  306360 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 14:11:55.995366  306360 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 14:11:55.995373  306360 cache.go:56] Caching tarball of preloaded images
	I0407 14:11:55.995446  306360 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:11:55.995457  306360 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0407 14:11:55.995544  306360 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/config.json ...
	I0407 14:11:55.995715  306360 start.go:360] acquireMachinesLock for old-k8s-version-405646: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:11:55.995754  306360 start.go:364] duration metric: took 23.201µs to acquireMachinesLock for "old-k8s-version-405646"
	I0407 14:11:55.995767  306360 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:11:55.995772  306360 fix.go:54] fixHost starting: 
	I0407 14:11:55.996018  306360 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:11:55.996047  306360 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:11:56.011010  306360 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44415
	I0407 14:11:56.011591  306360 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:11:56.012062  306360 main.go:141] libmachine: Using API Version  1
	I0407 14:11:56.012083  306360 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:11:56.012378  306360 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:11:56.012579  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:11:56.012739  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetState
	I0407 14:11:56.014443  306360 fix.go:112] recreateIfNeeded on old-k8s-version-405646: state=Stopped err=<nil>
	I0407 14:11:56.014470  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	W0407 14:11:56.014611  306360 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:11:56.016555  306360 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-405646" ...
	I0407 14:11:56.017608  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .Start
	I0407 14:11:56.017771  306360 main.go:141] libmachine: (old-k8s-version-405646) starting domain...
	I0407 14:11:56.017789  306360 main.go:141] libmachine: (old-k8s-version-405646) ensuring networks are active...
	I0407 14:11:56.018598  306360 main.go:141] libmachine: (old-k8s-version-405646) Ensuring network default is active
	I0407 14:11:56.018991  306360 main.go:141] libmachine: (old-k8s-version-405646) Ensuring network mk-old-k8s-version-405646 is active
	I0407 14:11:56.019366  306360 main.go:141] libmachine: (old-k8s-version-405646) getting domain XML...
	I0407 14:11:56.020165  306360 main.go:141] libmachine: (old-k8s-version-405646) creating domain...
	I0407 14:11:57.299036  306360 main.go:141] libmachine: (old-k8s-version-405646) waiting for IP...
	I0407 14:11:57.299924  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:57.300416  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:57.300534  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:57.300432  306395 retry.go:31] will retry after 193.477495ms: waiting for domain to come up
	I0407 14:11:57.495958  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:57.496753  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:57.496783  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:57.496707  306395 retry.go:31] will retry after 307.236012ms: waiting for domain to come up
	I0407 14:11:57.805214  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:57.805826  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:57.805867  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:57.805771  306395 retry.go:31] will retry after 397.910695ms: waiting for domain to come up
	I0407 14:11:58.205040  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:58.205673  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:58.205697  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:58.205654  306395 retry.go:31] will retry after 399.871416ms: waiting for domain to come up
	I0407 14:11:58.607445  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:58.608061  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:58.608089  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:58.607974  306395 retry.go:31] will retry after 552.338737ms: waiting for domain to come up
	I0407 14:11:59.161614  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:11:59.162159  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:11:59.162188  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:11:59.162105  306395 retry.go:31] will retry after 951.634731ms: waiting for domain to come up
	I0407 14:12:00.115199  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:00.115695  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:00.115730  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:00.115677  306395 retry.go:31] will retry after 775.027784ms: waiting for domain to come up
	I0407 14:12:00.892724  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:00.893210  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:00.893255  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:00.893206  306395 retry.go:31] will retry after 916.195998ms: waiting for domain to come up
	I0407 14:12:01.810904  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:01.811448  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:01.811476  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:01.811419  306395 retry.go:31] will retry after 1.27182594s: waiting for domain to come up
	I0407 14:12:03.084909  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:03.085478  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:03.085528  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:03.085442  306395 retry.go:31] will retry after 1.879233924s: waiting for domain to come up
	I0407 14:12:04.966367  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:04.966864  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:04.966920  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:04.966852  306395 retry.go:31] will retry after 2.498986843s: waiting for domain to come up
	I0407 14:12:07.467790  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:07.468417  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:07.468460  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:07.468384  306395 retry.go:31] will retry after 3.095785247s: waiting for domain to come up
	I0407 14:12:10.566665  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:10.567213  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | unable to find current IP address of domain old-k8s-version-405646 in network mk-old-k8s-version-405646
	I0407 14:12:10.567274  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | I0407 14:12:10.567169  306395 retry.go:31] will retry after 3.212004915s: waiting for domain to come up
	I0407 14:12:13.781588  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.782142  306360 main.go:141] libmachine: (old-k8s-version-405646) found domain IP: 192.168.72.163
	I0407 14:12:13.782174  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has current primary IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
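The "waiting for IP" block above is a poll of the network's DHCP leases with a growing, jittered delay (193ms, 307ms, ... up to ~3.2s). A sketch of the same backoff pattern, assuming a hypothetical lookupIP helper that stands in for reading the libvirt leases for the domain's MAC; none of these names are minikube's retry.go helpers:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for "read the DHCP leases of mk-old-k8s-version-405646
    // and return the address bound to the domain's MAC address".
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // grow the wait and add jitter, roughly like the 193ms..3.2s steps in the log
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 3*time.Second {
                backoff = backoff * 3 / 2
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(5 * time.Second); err == nil {
            fmt.Println("found domain IP:", ip)
        } else {
            fmt.Println(err)
        }
    }
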
	I0407 14:12:13.782182  306360 main.go:141] libmachine: (old-k8s-version-405646) reserving static IP address...
	I0407 14:12:13.782699  306360 main.go:141] libmachine: (old-k8s-version-405646) reserved static IP address 192.168.72.163 for domain old-k8s-version-405646
	I0407 14:12:13.782726  306360 main.go:141] libmachine: (old-k8s-version-405646) waiting for SSH...
	I0407 14:12:13.782766  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "old-k8s-version-405646", mac: "52:54:00:54:a4:af", ip: "192.168.72.163"} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:13.782791  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | skip adding static IP to network mk-old-k8s-version-405646 - found existing host DHCP lease matching {name: "old-k8s-version-405646", mac: "52:54:00:54:a4:af", ip: "192.168.72.163"}
	I0407 14:12:13.782810  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | Getting to WaitForSSH function...
	I0407 14:12:13.784987  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.785277  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:13.785315  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.785443  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH client type: external
	I0407 14:12:13.785487  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa (-rw-------)
	I0407 14:12:13.785540  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:12:13.785557  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | About to run SSH command:
	I0407 14:12:13.785572  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | exit 0
	I0407 14:12:13.912523  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | SSH cmd err, output: <nil>: 
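The WaitForSSH step shells out to /usr/bin/ssh with the non-interactive flag set shown above and runs `exit 0` until the guest's sshd accepts the key. A hedged reconstruction of that probe (probeSSH is our name, not minikube's; the key path is a placeholder, and only the flags relevant to the probe are kept):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs `exit 0` on the guest through the system ssh client;
    // it returns nil once sshd is up and accepts the private key.
    func probeSSH(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
        for i := 0; i < 30; i++ {
            if err := probeSSH("192.168.72.163", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is up")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
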
	I0407 14:12:13.912907  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetConfigRaw
	I0407 14:12:13.913655  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:12:13.916485  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.916842  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:13.916865  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.917120  306360 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/config.json ...
	I0407 14:12:13.917380  306360 machine.go:93] provisionDockerMachine start ...
	I0407 14:12:13.917399  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:13.917624  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:13.920381  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.920816  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:13.920845  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:13.920962  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:13.921140  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:13.921308  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:13.921456  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:13.921675  306360 main.go:141] libmachine: Using SSH client type: native
	I0407 14:12:13.921967  306360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:12:13.921982  306360 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:12:14.025082  306360 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:12:14.025122  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:12:14.025404  306360 buildroot.go:166] provisioning hostname "old-k8s-version-405646"
	I0407 14:12:14.025434  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:12:14.025682  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.028487  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.028874  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.028908  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.029045  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:14.029247  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.029419  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.029582  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:14.029743  306360 main.go:141] libmachine: Using SSH client type: native
	I0407 14:12:14.029935  306360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:12:14.029947  306360 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405646 && echo "old-k8s-version-405646" | sudo tee /etc/hostname
	I0407 14:12:14.149033  306360 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405646
	
	I0407 14:12:14.149073  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.152070  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.152466  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.152517  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.152656  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:14.152861  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.153051  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.153203  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:14.153411  306360 main.go:141] libmachine: Using SSH client type: native
	I0407 14:12:14.153624  306360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:12:14.153641  306360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405646/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:12:14.268235  306360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
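From here on, provisioning commands such as the hostname and /etc/hosts edits above run over the "native" SSH client, i.e. golang.org/x/crypto/ssh, rather than the external ssh binary. A minimal sketch of running one of those commands that way (host, user and the hostname command come from the log; the key path and error handling are illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "192.168.72.163:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }
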
	I0407 14:12:14.268273  306360 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:12:14.268337  306360 buildroot.go:174] setting up certificates
	I0407 14:12:14.268351  306360 provision.go:84] configureAuth start
	I0407 14:12:14.268366  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetMachineName
	I0407 14:12:14.268712  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:12:14.272265  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.272767  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.272813  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.272996  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.276242  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.276706  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.276743  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.276959  306360 provision.go:143] copyHostCerts
	I0407 14:12:14.277023  306360 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:12:14.277034  306360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:12:14.277105  306360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:12:14.277194  306360 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:12:14.277202  306360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:12:14.277227  306360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:12:14.277276  306360 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:12:14.277283  306360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:12:14.277304  306360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:12:14.277352  306360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405646 san=[127.0.0.1 192.168.72.163 localhost minikube old-k8s-version-405646]
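Generating machines/server.pem for the SANs listed above comes down to issuing a leaf certificate signed by the profile CA (ca.pem / ca-key.pem). A rough sketch with crypto/x509, assuming the CA key is an RSA PKCS#1 PEM; file names, validity period and the trimmed error handling are assumptions, not minikube's own helper:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // load the CA certificate and key (error handling trimmed for brevity)
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

        // fresh key for the server cert
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-405646"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // the SAN list from the log line above
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-405646"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.163")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
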
	I0407 14:12:14.490290  306360 provision.go:177] copyRemoteCerts
	I0407 14:12:14.490360  306360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:12:14.490389  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.493068  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.493427  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.493465  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.493651  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:14.493859  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.494080  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:14.494228  306360 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:12:14.576178  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:12:14.602762  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:12:14.629440  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 14:12:14.656255  306360 provision.go:87] duration metric: took 387.888704ms to configureAuth
	I0407 14:12:14.656283  306360 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:12:14.656527  306360 config.go:182] Loaded profile config "old-k8s-version-405646": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0407 14:12:14.656644  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.659709  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.660050  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.660074  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.660342  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:14.660569  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.660708  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.660845  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:14.660987  306360 main.go:141] libmachine: Using SSH client type: native
	I0407 14:12:14.661224  306360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:12:14.661251  306360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:12:14.898159  306360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:12:14.898198  306360 machine.go:96] duration metric: took 980.803941ms to provisionDockerMachine
	I0407 14:12:14.898214  306360 start.go:293] postStartSetup for "old-k8s-version-405646" (driver="kvm2")
	I0407 14:12:14.898229  306360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:12:14.898269  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:14.898610  306360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:12:14.898641  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:14.901071  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.901403  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:14.901436  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:14.901590  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:14.901803  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:14.901954  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:14.902100  306360 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:12:14.987873  306360 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:12:14.992608  306360 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:12:14.992633  306360 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:12:14.992688  306360 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:12:14.992756  306360 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:12:14.992867  306360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:12:15.003083  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:12:15.029596  306360 start.go:296] duration metric: took 131.362572ms for postStartSetup
	I0407 14:12:15.029648  306360 fix.go:56] duration metric: took 19.033874084s for fixHost
	I0407 14:12:15.029676  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:15.032303  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.032693  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:15.032724  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.032906  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:15.033091  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:15.033260  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:15.033434  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:15.033613  306360 main.go:141] libmachine: Using SSH client type: native
	I0407 14:12:15.033857  306360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.163 22 <nil> <nil>}
	I0407 14:12:15.033867  306360 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:12:15.137646  306360 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035135.108124206
	
	I0407 14:12:15.137668  306360 fix.go:216] guest clock: 1744035135.108124206
	I0407 14:12:15.137676  306360 fix.go:229] Guest: 2025-04-07 14:12:15.108124206 +0000 UTC Remote: 2025-04-07 14:12:15.029653271 +0000 UTC m=+19.191379059 (delta=78.470935ms)
	I0407 14:12:15.137696  306360 fix.go:200] guest clock delta is within tolerance: 78.470935ms
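The clock check captures `date +%s.%N` from the guest, parses it as fractional seconds, and compares the difference against host time before deciding whether the guest clock needs correcting. A small sketch of that comparison (the sample value is the one from the log; the 2s tolerance is an assumption, not the real threshold):

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "time"
    )

    func main() {
        // output of `date +%s.%N` captured over SSH
        raw := "1744035135.108124206"
        secs, err := strconv.ParseFloat(raw, 64)
        if err != nil {
            log.Fatal(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed; adjust the guest clock if exceeded
        fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta <= tolerance)
    }
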
	I0407 14:12:15.137700  306360 start.go:83] releasing machines lock for "old-k8s-version-405646", held for 19.141938375s
	I0407 14:12:15.137731  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:15.138008  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:12:15.140906  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.141315  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:15.141347  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.141505  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:15.142049  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:15.142266  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .DriverName
	I0407 14:12:15.142392  306360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:12:15.142454  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:15.142504  306360 ssh_runner.go:195] Run: cat /version.json
	I0407 14:12:15.142531  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHHostname
	I0407 14:12:15.145652  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.145684  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.146032  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:15.146081  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.146122  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:15.146151  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:15.146300  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:15.146454  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHPort
	I0407 14:12:15.146532  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:15.146628  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHKeyPath
	I0407 14:12:15.146677  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:15.146734  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetSSHUsername
	I0407 14:12:15.146830  306360 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:12:15.146877  306360 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/old-k8s-version-405646/id_rsa Username:docker}
	I0407 14:12:15.247710  306360 ssh_runner.go:195] Run: systemctl --version
	I0407 14:12:15.254440  306360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:12:15.410878  306360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:12:15.417026  306360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:12:15.417100  306360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:12:15.433801  306360 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:12:15.433836  306360 start.go:495] detecting cgroup driver to use...
	I0407 14:12:15.433918  306360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:12:15.450765  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:12:15.466113  306360 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:12:15.466190  306360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:12:15.483294  306360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:12:15.498507  306360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:12:15.610573  306360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:12:15.765323  306360 docker.go:233] disabling docker service ...
	I0407 14:12:15.765428  306360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:12:15.782342  306360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:12:15.796522  306360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:12:15.936523  306360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:12:16.068195  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:12:16.084639  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:12:16.103913  306360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0407 14:12:16.103985  306360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:12:16.114935  306360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:12:16.115011  306360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:12:16.125932  306360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:12:16.138157  306360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:12:16.149464  306360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:12:16.162225  306360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:12:16.173835  306360 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:12:16.173908  306360 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:12:16.189001  306360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:12:16.200007  306360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:12:16.320490  306360 ssh_runner.go:195] Run: sudo systemctl restart crio
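The pause image and cgroup driver are set by rewriting /etc/crio/crio.conf.d/02-crio.conf in place, which is all the sed one-liners above do before crio is restarted. A Go sketch of an equivalent in-place edit (the regexes mirror the sed patterns; this approximates the shell steps and is not minikube code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // pin the pause image used by the runtime
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        // switch to cgroupfs and keep conmon in the pod cgroup
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`))

        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // the log then runs: systemctl daemon-reload && systemctl restart crio
    }
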
	I0407 14:12:16.417009  306360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:12:16.417093  306360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:12:16.421927  306360 start.go:563] Will wait 60s for crictl version
	I0407 14:12:16.422020  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:16.425873  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:12:16.471626  306360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:12:16.471718  306360 ssh_runner.go:195] Run: crio --version
	I0407 14:12:16.501747  306360 ssh_runner.go:195] Run: crio --version
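After the restart the flow is gated twice: first on /var/run/crio/crio.sock reappearing, then on crictl being able to report a version. A sketch of that double wait (the 60s budgets come from the log; the poll interval is an assumption):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls fn until it succeeds or the timeout expires.
    func waitFor(what string, timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := fn(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", what)
    }

    func main() {
        // 1. the socket file must exist again after `systemctl restart crio`
        if err := waitFor("/var/run/crio/crio.sock", 60*time.Second, func() error {
            _, err := os.Stat("/var/run/crio/crio.sock")
            return err
        }); err != nil {
            log.Fatal(err)
        }
        // 2. crictl must be able to talk to the runtime
        if err := waitFor("crictl version", 60*time.Second, func() error {
            return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
        }); err != nil {
            log.Fatal(err)
        }
        fmt.Println("CRI-O is back up")
    }
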
	I0407 14:12:16.533917  306360 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0407 14:12:16.535255  306360 main.go:141] libmachine: (old-k8s-version-405646) Calling .GetIP
	I0407 14:12:16.538665  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:16.539122  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:a4:af", ip: ""} in network mk-old-k8s-version-405646: {Iface:virbr4 ExpiryTime:2025-04-07 15:05:31 +0000 UTC Type:0 Mac:52:54:00:54:a4:af Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-405646 Clientid:01:52:54:00:54:a4:af}
	I0407 14:12:16.539150  306360 main.go:141] libmachine: (old-k8s-version-405646) DBG | domain old-k8s-version-405646 has defined IP address 192.168.72.163 and MAC address 52:54:00:54:a4:af in network mk-old-k8s-version-405646
	I0407 14:12:16.539461  306360 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0407 14:12:16.544609  306360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:12:16.558237  306360 kubeadm.go:883] updating cluster {Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:12:16.558368  306360 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 14:12:16.558420  306360 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:12:16.605114  306360 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 14:12:16.605185  306360 ssh_runner.go:195] Run: which lz4
	I0407 14:12:16.609869  306360 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 14:12:16.614474  306360 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 14:12:16.614508  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0407 14:12:18.379307  306360 crio.go:462] duration metric: took 1.769484444s to copy over tarball
	I0407 14:12:18.379379  306360 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 14:12:21.375925  306360 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.996510911s)
	I0407 14:12:21.375976  306360 crio.go:469] duration metric: took 2.996620041s to extract the tarball
	I0407 14:12:21.375987  306360 ssh_runner.go:146] rm: /preloaded.tar.lz4
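Because no preloaded images were found in the runtime, the ~473MB preload tarball is copied to the guest and unpacked into /var with tar's lz4 filter, then deleted. The extraction step driven from Go, for illustration (the command line is copied from the log; it needs root and an lz4 binary on the guest):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", // decompress through the external lz4 binary
            "-C", "/var", // image and container stores live under /var
            "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("took %s to extract the tarball", time.Since(start))
    }
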
	I0407 14:12:21.424344  306360 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:12:21.466030  306360 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0407 14:12:21.466066  306360 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0407 14:12:21.466137  306360 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:12:21.466170  306360 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.466183  306360 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.466237  306360 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:21.466278  306360 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.466172  306360 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.466641  306360 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.466811  306360 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0407 14:12:21.469606  306360 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:12:21.469628  306360 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:21.469681  306360 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0407 14:12:21.469717  306360 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.469683  306360 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.469947  306360 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.469693  306360 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.470094  306360 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.619261  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.622031  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.627759  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:21.633053  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.640131  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.649468  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.663704  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0407 14:12:21.717652  306360 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0407 14:12:21.717705  306360 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.717747  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.755105  306360 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0407 14:12:21.755166  306360 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.755221  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.825039  306360 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0407 14:12:21.825068  306360 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0407 14:12:21.825090  306360 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.825099  306360 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0407 14:12:21.825120  306360 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:21.825133  306360 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.825150  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.825161  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.825173  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.825194  306360 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0407 14:12:21.825221  306360 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.825250  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.844789  306360 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0407 14:12:21.844845  306360 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0407 14:12:21.844864  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.844889  306360 ssh_runner.go:195] Run: which crictl
	I0407 14:12:21.844935  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.844986  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:21.845016  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.844991  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.845051  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.970672  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:21.992695  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:21.992754  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:21.992754  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:12:21.992837  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:12:21.992953  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:21.992968  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:22.045643  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0407 14:12:22.155223  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0407 14:12:22.161141  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0407 14:12:22.161220  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0407 14:12:22.161323  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:12:22.161332  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0407 14:12:22.161405  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0407 14:12:22.188083  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0407 14:12:22.311856  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0407 14:12:22.311886  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0407 14:12:22.315627  306360 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0407 14:12:22.325304  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0407 14:12:22.325362  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0407 14:12:22.325385  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0407 14:12:22.352301  306360 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0407 14:12:23.007813  306360 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:12:23.154284  306360 cache_images.go:92] duration metric: took 1.688197132s to LoadCachedImages
	W0407 14:12:23.154400  306360 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20598-242355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0407 14:12:23.154417  306360 kubeadm.go:934] updating node { 192.168.72.163 8443 v1.20.0 crio true true} ...
	I0407 14:12:23.154582  306360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405646 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
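	The kubelet drop-in above is rendered by minikube with the node's Kubernetes version, hostname and IP substituted in. As a rough sketch of how such a drop-in can be produced with Go's text/template (the template text and field names here are illustrative, not minikube's actual implementation):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit is a simplified stand-in for the drop-in written to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (hypothetical template).
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the log entry above.
		data := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.20.0", "old-k8s-version-405646", "192.168.72.163"}
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}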
	I0407 14:12:23.154678  306360 ssh_runner.go:195] Run: crio config
	I0407 14:12:23.211344  306360 cni.go:84] Creating CNI manager for ""
	I0407 14:12:23.211376  306360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:12:23.211393  306360 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:12:23.211420  306360 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.163 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405646 NodeName:old-k8s-version-405646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0407 14:12:23.211573  306360 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405646"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
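	The generated config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. A minimal structural sanity check of such a file can be done with the standard library alone; this is an illustrative sketch, not minikube's own validation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path used in the log; adjust as needed.
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The file holds four YAML documents separated by "---" lines.
		docs := strings.Split(string(raw), "\n---\n")
		want := []string{"InitConfiguration", "ClusterConfiguration", "KubeletConfiguration", "KubeProxyConfiguration"}
		for i, kind := range want {
			if i >= len(docs) || !strings.Contains(docs[i], "kind: "+kind) {
				fmt.Printf("missing or out-of-order document: %s\n", kind)
				os.Exit(1)
			}
		}
		fmt.Println("kubeadm config looks structurally complete")
	}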
	
	I0407 14:12:23.211647  306360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0407 14:12:23.223834  306360 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:12:23.223920  306360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:12:23.234996  306360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0407 14:12:23.254685  306360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:12:23.273528  306360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0407 14:12:23.292843  306360 ssh_runner.go:195] Run: grep 192.168.72.163	control-plane.minikube.internal$ /etc/hosts
	I0407 14:12:23.297714  306360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
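	The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. The same edit expressed in Go, as a sketch that assumes the process may write /etc/hosts:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.72.163" // node IP from the log above

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for the control-plane name.
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}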
	I0407 14:12:23.311923  306360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:12:23.456416  306360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:12:23.478108  306360 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646 for IP: 192.168.72.163
	I0407 14:12:23.478134  306360 certs.go:194] generating shared ca certs ...
	I0407 14:12:23.478153  306360 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:12:23.478366  306360 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:12:23.478431  306360 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:12:23.478446  306360 certs.go:256] generating profile certs ...
	I0407 14:12:23.478580  306360 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/client.key
	I0407 14:12:23.478655  306360 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key.f7e6b837
	I0407 14:12:23.478711  306360 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key
	I0407 14:12:23.478880  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:12:23.478928  306360 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:12:23.478945  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:12:23.478979  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:12:23.479014  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:12:23.479040  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:12:23.479091  306360 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:12:23.479858  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:12:23.553036  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:12:23.587117  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:12:23.624257  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:12:23.663003  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0407 14:12:23.703655  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:12:23.730073  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:12:23.756981  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/old-k8s-version-405646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0407 14:12:23.784847  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:12:23.812593  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:12:23.837016  306360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:12:23.864088  306360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:12:23.882189  306360 ssh_runner.go:195] Run: openssl version
	I0407 14:12:23.888804  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:12:23.900906  306360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:12:23.906049  306360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:12:23.906110  306360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:12:23.912640  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:12:23.923952  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:12:23.935071  306360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:12:23.939960  306360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:12:23.940014  306360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:12:23.946029  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:12:23.957125  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:12:23.968766  306360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:12:23.973936  306360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:12:23.973990  306360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:12:23.980070  306360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
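	The ln -fs steps above make each CA discoverable to OpenSSL by linking it under /etc/ssl/certs as <subject-hash>.0, where the hash comes from openssl x509 -hash. A compact sketch of that step in Go (it shells out to openssl, which is assumed to be installed; paths mirror the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash symlinks certPath into /etc/ssl/certs under the name
	// "<subject-hash>.0", the layout OpenSSL uses to look up trusted CAs.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}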
	I0407 14:12:23.991476  306360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:12:23.996418  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:12:24.002411  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:12:24.008137  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:12:24.014191  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:12:24.020225  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:12:24.026223  306360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
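	Each openssl x509 -checkend 86400 run above asks whether the certificate remains valid for at least another 24 hours. The equivalent check can be done directly with crypto/x509; this sketch uses one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid until", cert.NotAfter)
	}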
	I0407 14:12:24.032105  306360 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405646 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:12:24.032193  306360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:12:24.032252  306360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:12:24.091021  306360 cri.go:89] found id: ""
	I0407 14:12:24.091087  306360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:12:24.101586  306360 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:12:24.101612  306360 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:12:24.101670  306360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:12:24.111208  306360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:12:24.112249  306360 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-405646" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:12:24.112847  306360 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-242355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-405646" cluster setting kubeconfig missing "old-k8s-version-405646" context setting]
	I0407 14:12:24.113729  306360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:12:24.115587  306360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:12:24.124912  306360 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.163
	I0407 14:12:24.124941  306360 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:12:24.124952  306360 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 14:12:24.124994  306360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:12:24.163438  306360 cri.go:89] found id: ""
	I0407 14:12:24.163522  306360 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:12:24.179917  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:12:24.189698  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:12:24.189724  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:12:24.189777  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:12:24.198919  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:12:24.198996  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:12:24.209813  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:12:24.219571  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:12:24.219637  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:12:24.229405  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:12:24.238969  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:12:24.239087  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:12:24.250071  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:12:24.259750  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:12:24.259836  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:12:24.270052  306360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:12:24.282680  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:12:24.414373  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:12:25.021499  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:12:25.276954  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:12:25.408020  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
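	The restart path above re-runs a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of driving that same sequence from Go, assuming the kubeadm binary and config path shown in the log (this is not minikube's own code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const kubeadm = "/var/lib/minikube/binaries/v1.20.0/kubeadm"
		const config = "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", config},
			{"init", "phase", "kubeconfig", "all", "--config", config},
			{"init", "phase", "kubelet-start", "--config", config},
			{"init", "phase", "control-plane", "all", "--config", config},
			{"init", "phase", "etcd", "local", "--config", config},
		}
		for _, args := range phases {
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
				os.Exit(1)
			}
		}
	}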
	I0407 14:12:25.511109  306360 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:12:25.511183  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:26.011299  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:26.512247  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:27.011416  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:27.512041  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:28.011665  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:28.511457  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:29.011997  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:29.512175  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:30.012181  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:30.511651  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:31.011615  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:31.511578  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:32.012291  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:32.512008  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:33.011527  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:33.511253  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:34.012138  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:34.511905  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:35.012197  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:35.512089  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:36.012031  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:36.511749  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:37.011586  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:37.512250  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:38.011452  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:38.512258  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:39.011740  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:39.511343  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:40.011625  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:40.512014  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:41.012045  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:41.511993  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:42.011426  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:42.512222  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:43.011624  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:43.512226  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:44.011931  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:44.511661  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:45.011539  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:45.511349  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:46.011275  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:46.511902  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:47.011276  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:47.511433  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:48.011616  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:48.512057  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:49.012064  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:49.511717  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:50.011569  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:50.512220  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:51.012215  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:51.511589  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:52.011562  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:52.511388  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:53.011305  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:53.511937  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:54.011505  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:54.511842  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:55.012023  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:55.512073  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:56.012099  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:56.511469  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:57.012147  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:57.511594  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:58.011634  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:58.511294  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:59.011378  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:12:59.511596  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:00.011559  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:00.511960  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:01.011352  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:01.511530  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:02.012169  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:02.512198  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:03.011432  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:03.511580  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:04.011961  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:04.511326  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:05.012075  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:05.511749  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:06.011989  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:06.511641  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:07.011441  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:07.511442  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:08.011623  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:08.511648  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:09.011256  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:09.511573  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:10.011590  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:10.511453  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:11.012257  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:11.511975  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:12.011396  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:12.511306  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:13.011634  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:13.511520  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:14.012283  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:14.511576  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:15.011718  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:15.511865  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:16.011468  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:16.512013  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:17.011385  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:17.511900  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:18.011763  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:18.511623  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:19.012181  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:19.512097  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:20.011448  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:20.511916  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:21.012090  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:21.511894  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:22.011574  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:22.511964  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:23.011995  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:23.511709  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:24.012104  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:24.511359  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:25.012056  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
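	The burst of pgrep runs above is minikube polling roughly twice a second for a kube-apiserver process; here no process ever appears, so after about a minute it gives up and falls back to collecting logs. A minimal version of that wait loop (pgrep is assumed to be on PATH; the timeout value is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process shows up
	// or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // pgrep exits 0 when a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kube-apiserver process is running")
	}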
	I0407 14:13:25.511528  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:25.511608  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:25.550614  306360 cri.go:89] found id: ""
	I0407 14:13:25.550646  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.550662  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:25.550667  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:25.550720  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:25.590641  306360 cri.go:89] found id: ""
	I0407 14:13:25.590676  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.590684  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:25.590690  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:25.590750  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:25.626364  306360 cri.go:89] found id: ""
	I0407 14:13:25.626394  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.626402  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:25.626409  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:25.626475  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:25.664665  306360 cri.go:89] found id: ""
	I0407 14:13:25.664704  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.664716  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:25.664724  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:25.664796  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:25.702216  306360 cri.go:89] found id: ""
	I0407 14:13:25.702245  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.702252  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:25.702258  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:25.702311  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:25.738050  306360 cri.go:89] found id: ""
	I0407 14:13:25.738080  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.738091  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:25.738099  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:25.738187  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:25.775883  306360 cri.go:89] found id: ""
	I0407 14:13:25.775908  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.775915  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:25.775922  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:25.775996  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:25.812276  306360 cri.go:89] found id: ""
	I0407 14:13:25.812303  306360 logs.go:282] 0 containers: []
	W0407 14:13:25.812314  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:25.812329  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:25.812343  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:25.882580  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:25.882619  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:25.927140  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:25.927166  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:25.980666  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:25.980701  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:25.996925  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:25.996953  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:26.137004  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
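	Each "Gathering logs for ..." cycle above collects the same set of diagnostics: kubelet and CRI-O journals, dmesg, container status, and a kubectl describe nodes that fails here because no apiserver is listening on localhost:8443. A simplified sketch of such a collector loop (the commands generally need root; the exact flags in the log are trimmed for brevity):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Each collector mirrors one of the "Gathering logs for ..." steps above.
	func main() {
		collectors := map[string][]string{
			"kubelet":          {"journalctl", "-u", "kubelet", "-n", "400"},
			"CRI-O":            {"journalctl", "-u", "crio", "-n", "400"},
			"dmesg":            {"dmesg", "--level", "warn,err,crit,alert,emerg"},
			"container status": {"crictl", "ps", "-a"},
		}
		for name, args := range collectors {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("==> %s <==\n%s", name, out)
			if err != nil {
				fmt.Printf("(collector %q failed: %v)\n", name, err)
			}
		}
	}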
	I0407 14:13:28.637983  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:28.652405  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:28.652503  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:28.689014  306360 cri.go:89] found id: ""
	I0407 14:13:28.689044  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.689052  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:28.689058  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:28.689122  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:28.733120  306360 cri.go:89] found id: ""
	I0407 14:13:28.733149  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.733160  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:28.733167  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:28.733226  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:28.784299  306360 cri.go:89] found id: ""
	I0407 14:13:28.784327  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.784335  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:28.784340  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:28.784393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:28.820546  306360 cri.go:89] found id: ""
	I0407 14:13:28.820599  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.820607  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:28.820613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:28.820680  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:28.868293  306360 cri.go:89] found id: ""
	I0407 14:13:28.868332  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.868344  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:28.868352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:28.868451  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:28.909001  306360 cri.go:89] found id: ""
	I0407 14:13:28.909025  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.909031  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:28.909036  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:28.909095  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:28.950865  306360 cri.go:89] found id: ""
	I0407 14:13:28.950904  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.950916  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:28.950924  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:28.950987  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:28.986713  306360 cri.go:89] found id: ""
	I0407 14:13:28.986741  306360 logs.go:282] 0 containers: []
	W0407 14:13:28.986750  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:28.986763  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:28.986777  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:29.038966  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:29.039005  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:29.054120  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:29.054172  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:29.126493  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:29.126524  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:29.126543  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:29.204829  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:29.204893  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:31.745215  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:31.758881  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:31.758964  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:31.795921  306360 cri.go:89] found id: ""
	I0407 14:13:31.795954  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.795965  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:31.795974  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:31.796039  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:31.832648  306360 cri.go:89] found id: ""
	I0407 14:13:31.832680  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.832692  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:31.832699  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:31.832765  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:31.868591  306360 cri.go:89] found id: ""
	I0407 14:13:31.868620  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.868629  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:31.868639  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:31.868707  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:31.908169  306360 cri.go:89] found id: ""
	I0407 14:13:31.908197  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.908206  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:31.908214  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:31.908274  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:31.942219  306360 cri.go:89] found id: ""
	I0407 14:13:31.942246  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.942254  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:31.942260  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:31.942312  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:31.978419  306360 cri.go:89] found id: ""
	I0407 14:13:31.978443  306360 logs.go:282] 0 containers: []
	W0407 14:13:31.978450  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:31.978455  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:31.978505  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:32.014175  306360 cri.go:89] found id: ""
	I0407 14:13:32.014203  306360 logs.go:282] 0 containers: []
	W0407 14:13:32.014210  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:32.014215  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:32.014267  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:32.051578  306360 cri.go:89] found id: ""
	I0407 14:13:32.051602  306360 logs.go:282] 0 containers: []
	W0407 14:13:32.051609  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:32.051619  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:32.051630  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:32.132388  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:32.132453  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:32.172783  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:32.172819  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:32.223397  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:32.223436  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:32.238998  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:32.239034  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:32.316123  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:34.816785  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:34.833494  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:34.833558  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:34.872845  306360 cri.go:89] found id: ""
	I0407 14:13:34.872870  306360 logs.go:282] 0 containers: []
	W0407 14:13:34.872898  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:34.872904  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:34.872967  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:34.908736  306360 cri.go:89] found id: ""
	I0407 14:13:34.908766  306360 logs.go:282] 0 containers: []
	W0407 14:13:34.908777  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:34.908784  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:34.908859  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:34.946969  306360 cri.go:89] found id: ""
	I0407 14:13:34.946994  306360 logs.go:282] 0 containers: []
	W0407 14:13:34.947001  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:34.947008  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:34.947061  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:34.981899  306360 cri.go:89] found id: ""
	I0407 14:13:34.981924  306360 logs.go:282] 0 containers: []
	W0407 14:13:34.981934  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:34.981943  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:34.981999  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:35.015700  306360 cri.go:89] found id: ""
	I0407 14:13:35.015732  306360 logs.go:282] 0 containers: []
	W0407 14:13:35.015744  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:35.015751  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:35.015805  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:35.048060  306360 cri.go:89] found id: ""
	I0407 14:13:35.048091  306360 logs.go:282] 0 containers: []
	W0407 14:13:35.048099  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:35.048106  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:35.048173  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:35.087826  306360 cri.go:89] found id: ""
	I0407 14:13:35.087865  306360 logs.go:282] 0 containers: []
	W0407 14:13:35.087876  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:35.087884  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:35.087954  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:35.131304  306360 cri.go:89] found id: ""
	I0407 14:13:35.131335  306360 logs.go:282] 0 containers: []
	W0407 14:13:35.131344  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:35.131354  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:35.131367  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:35.212189  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:35.212211  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:35.212229  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:35.289026  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:35.289061  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:35.332162  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:35.332192  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:35.385193  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:35.385230  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:37.901105  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:37.916306  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:37.916393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:37.951041  306360 cri.go:89] found id: ""
	I0407 14:13:37.951066  306360 logs.go:282] 0 containers: []
	W0407 14:13:37.951075  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:37.951082  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:37.951153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:37.989512  306360 cri.go:89] found id: ""
	I0407 14:13:37.989540  306360 logs.go:282] 0 containers: []
	W0407 14:13:37.989548  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:37.989554  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:37.989622  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:38.023195  306360 cri.go:89] found id: ""
	I0407 14:13:38.023224  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.023233  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:38.023238  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:38.023304  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:38.061737  306360 cri.go:89] found id: ""
	I0407 14:13:38.061763  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.061771  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:38.061777  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:38.061839  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:38.097729  306360 cri.go:89] found id: ""
	I0407 14:13:38.097768  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.097779  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:38.097786  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:38.097858  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:38.134203  306360 cri.go:89] found id: ""
	I0407 14:13:38.134235  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.134245  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:38.134253  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:38.134326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:38.169617  306360 cri.go:89] found id: ""
	I0407 14:13:38.169644  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.169652  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:38.169658  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:38.169706  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:38.206249  306360 cri.go:89] found id: ""
	I0407 14:13:38.206282  306360 logs.go:282] 0 containers: []
	W0407 14:13:38.206294  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:38.206306  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:38.206322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:38.261217  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:38.261267  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:38.275570  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:38.275599  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:38.348233  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:38.348261  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:38.348279  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:38.430706  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:38.430745  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:40.978820  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:40.991749  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:40.991818  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:41.028149  306360 cri.go:89] found id: ""
	I0407 14:13:41.028190  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.028207  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:41.028215  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:41.028283  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:41.065860  306360 cri.go:89] found id: ""
	I0407 14:13:41.065886  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.065893  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:41.065899  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:41.065960  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:41.098395  306360 cri.go:89] found id: ""
	I0407 14:13:41.098425  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.098433  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:41.098439  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:41.098492  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:41.135495  306360 cri.go:89] found id: ""
	I0407 14:13:41.135523  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.135531  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:41.135536  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:41.135591  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:41.177014  306360 cri.go:89] found id: ""
	I0407 14:13:41.177042  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.177049  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:41.177055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:41.177121  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:41.212451  306360 cri.go:89] found id: ""
	I0407 14:13:41.212489  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.212499  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:41.212505  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:41.212561  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:41.247813  306360 cri.go:89] found id: ""
	I0407 14:13:41.247846  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.247858  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:41.247867  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:41.247928  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:41.284520  306360 cri.go:89] found id: ""
	I0407 14:13:41.284551  306360 logs.go:282] 0 containers: []
	W0407 14:13:41.284568  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:41.284579  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:41.284596  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:41.337909  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:41.337949  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:41.353335  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:41.353366  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:41.419376  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:41.419403  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:41.419419  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:41.500279  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:41.500322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:44.043288  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:44.056342  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:44.056441  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:44.092175  306360 cri.go:89] found id: ""
	I0407 14:13:44.092215  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.092227  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:44.092236  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:44.092305  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:44.126340  306360 cri.go:89] found id: ""
	I0407 14:13:44.126381  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.126394  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:44.126403  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:44.126474  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:44.160372  306360 cri.go:89] found id: ""
	I0407 14:13:44.160401  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.160409  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:44.160415  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:44.160489  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:44.193512  306360 cri.go:89] found id: ""
	I0407 14:13:44.193549  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.193565  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:44.193572  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:44.193637  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:44.228404  306360 cri.go:89] found id: ""
	I0407 14:13:44.228447  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.228459  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:44.228465  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:44.228533  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:44.268919  306360 cri.go:89] found id: ""
	I0407 14:13:44.268950  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.268958  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:44.268965  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:44.269018  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:44.308085  306360 cri.go:89] found id: ""
	I0407 14:13:44.308116  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.308123  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:44.308129  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:44.308182  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:44.345549  306360 cri.go:89] found id: ""
	I0407 14:13:44.345575  306360 logs.go:282] 0 containers: []
	W0407 14:13:44.345582  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:44.345594  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:44.345608  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:44.393482  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:44.393514  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:44.449042  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:44.449095  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:44.463489  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:44.463523  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:44.538893  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:44.538922  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:44.538937  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:47.126169  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:47.139967  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:47.140029  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:47.173906  306360 cri.go:89] found id: ""
	I0407 14:13:47.173937  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.173945  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:47.173951  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:47.174018  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:47.212980  306360 cri.go:89] found id: ""
	I0407 14:13:47.213009  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.213017  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:47.213027  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:47.213079  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:47.251766  306360 cri.go:89] found id: ""
	I0407 14:13:47.251793  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.251801  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:47.251806  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:47.251861  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:47.291798  306360 cri.go:89] found id: ""
	I0407 14:13:47.291823  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.291832  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:47.291840  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:47.291896  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:47.331373  306360 cri.go:89] found id: ""
	I0407 14:13:47.331408  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.331420  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:47.331428  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:47.331494  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:47.367779  306360 cri.go:89] found id: ""
	I0407 14:13:47.367806  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.367813  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:47.367820  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:47.367875  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:47.403991  306360 cri.go:89] found id: ""
	I0407 14:13:47.404020  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.404030  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:47.404038  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:47.404100  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:47.440066  306360 cri.go:89] found id: ""
	I0407 14:13:47.440107  306360 logs.go:282] 0 containers: []
	W0407 14:13:47.440118  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:47.440130  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:47.440148  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:47.493526  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:47.493572  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:47.507450  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:47.507482  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:47.578724  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:47.578743  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:47.578756  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:47.659721  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:47.659764  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:50.202758  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:50.217032  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:50.217141  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:50.257632  306360 cri.go:89] found id: ""
	I0407 14:13:50.257663  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.257673  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:50.257681  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:50.257743  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:50.295566  306360 cri.go:89] found id: ""
	I0407 14:13:50.295603  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.295614  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:50.295622  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:50.295682  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:50.330595  306360 cri.go:89] found id: ""
	I0407 14:13:50.330629  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.330641  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:50.330648  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:50.330722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:50.364813  306360 cri.go:89] found id: ""
	I0407 14:13:50.364869  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.364883  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:50.364892  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:50.364982  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:50.400268  306360 cri.go:89] found id: ""
	I0407 14:13:50.400300  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.400311  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:50.400319  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:50.400387  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:50.436376  306360 cri.go:89] found id: ""
	I0407 14:13:50.436404  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.436412  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:50.436418  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:50.436488  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:50.474517  306360 cri.go:89] found id: ""
	I0407 14:13:50.474551  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.474560  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:50.474566  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:50.474620  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:50.510091  306360 cri.go:89] found id: ""
	I0407 14:13:50.510118  306360 logs.go:282] 0 containers: []
	W0407 14:13:50.510127  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:50.510139  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:50.510155  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:50.523942  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:50.523983  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:50.595867  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:50.595898  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:50.595912  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:50.676586  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:50.676633  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:50.721850  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:50.721880  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:53.276621  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:53.291233  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:53.291332  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:53.328317  306360 cri.go:89] found id: ""
	I0407 14:13:53.328350  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.328362  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:53.328370  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:53.328436  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:53.362725  306360 cri.go:89] found id: ""
	I0407 14:13:53.362753  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.362761  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:53.362766  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:53.362829  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:53.400430  306360 cri.go:89] found id: ""
	I0407 14:13:53.400465  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.400477  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:53.400485  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:53.400551  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:53.439965  306360 cri.go:89] found id: ""
	I0407 14:13:53.439998  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.440007  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:53.440013  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:53.440080  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:53.476884  306360 cri.go:89] found id: ""
	I0407 14:13:53.476911  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.476918  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:53.476924  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:53.476988  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:53.511903  306360 cri.go:89] found id: ""
	I0407 14:13:53.511932  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.511940  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:53.511946  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:53.512003  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:53.546878  306360 cri.go:89] found id: ""
	I0407 14:13:53.546904  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.546912  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:53.546917  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:53.546978  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:53.584911  306360 cri.go:89] found id: ""
	I0407 14:13:53.584936  306360 logs.go:282] 0 containers: []
	W0407 14:13:53.584945  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:53.584953  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:53.584969  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:53.634693  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:53.634729  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:53.652627  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:53.652659  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:53.732686  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:53.732710  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:53.732725  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:53.813470  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:53.813519  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:56.356894  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:56.370704  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:56.370774  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:56.409773  306360 cri.go:89] found id: ""
	I0407 14:13:56.409809  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.409821  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:56.409829  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:56.409905  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:56.449253  306360 cri.go:89] found id: ""
	I0407 14:13:56.449286  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.449297  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:56.449305  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:56.449374  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:56.487302  306360 cri.go:89] found id: ""
	I0407 14:13:56.487330  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.487341  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:56.487354  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:56.487428  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:56.527623  306360 cri.go:89] found id: ""
	I0407 14:13:56.527649  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.527660  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:56.527666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:56.527730  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:56.566895  306360 cri.go:89] found id: ""
	I0407 14:13:56.566927  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.566937  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:56.566945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:56.567013  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:56.618765  306360 cri.go:89] found id: ""
	I0407 14:13:56.618797  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.618807  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:56.618815  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:56.618877  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:56.657323  306360 cri.go:89] found id: ""
	I0407 14:13:56.657356  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.657369  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:56.657378  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:56.657444  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:56.692696  306360 cri.go:89] found id: ""
	I0407 14:13:56.692729  306360 logs.go:282] 0 containers: []
	W0407 14:13:56.692740  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:56.692753  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:56.692768  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:56.747212  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:56.747257  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:56.761434  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:56.761471  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:13:56.832494  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:13:56.832523  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:56.832540  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:56.913321  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:56.913356  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:59.460032  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:13:59.473882  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:13:59.473969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:13:59.511820  306360 cri.go:89] found id: ""
	I0407 14:13:59.511851  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.511859  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:13:59.511865  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:13:59.511930  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:13:59.545136  306360 cri.go:89] found id: ""
	I0407 14:13:59.545169  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.545181  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:13:59.545188  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:13:59.545250  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:13:59.585928  306360 cri.go:89] found id: ""
	I0407 14:13:59.585957  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.585967  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:13:59.585976  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:13:59.586042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:13:59.626119  306360 cri.go:89] found id: ""
	I0407 14:13:59.626155  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.626168  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:13:59.626176  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:13:59.626245  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:13:59.664752  306360 cri.go:89] found id: ""
	I0407 14:13:59.664783  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.664792  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:13:59.664797  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:13:59.664860  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:13:59.697760  306360 cri.go:89] found id: ""
	I0407 14:13:59.697787  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.697795  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:13:59.697801  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:13:59.697853  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:13:59.733033  306360 cri.go:89] found id: ""
	I0407 14:13:59.733061  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.733069  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:13:59.733076  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:13:59.733131  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:13:59.766077  306360 cri.go:89] found id: ""
	I0407 14:13:59.766105  306360 logs.go:282] 0 containers: []
	W0407 14:13:59.766118  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:13:59.766127  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:13:59.766138  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:13:59.845289  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:13:59.845329  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:13:59.889776  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:13:59.889816  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:13:59.954776  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:13:59.954823  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:13:59.969841  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:13:59.969884  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:00.044170  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:02.545177  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:02.559385  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:02.559505  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:02.592000  306360 cri.go:89] found id: ""
	I0407 14:14:02.592032  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.592043  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:02.592051  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:02.592120  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:02.628333  306360 cri.go:89] found id: ""
	I0407 14:14:02.628370  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.628382  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:02.628389  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:02.628478  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:02.669276  306360 cri.go:89] found id: ""
	I0407 14:14:02.669311  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.669323  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:02.669331  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:02.669404  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:02.713180  306360 cri.go:89] found id: ""
	I0407 14:14:02.713217  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.713226  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:02.713233  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:02.713303  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:02.751385  306360 cri.go:89] found id: ""
	I0407 14:14:02.751410  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.751417  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:02.751423  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:02.751486  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:02.790747  306360 cri.go:89] found id: ""
	I0407 14:14:02.790780  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.790788  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:02.790795  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:02.790859  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:02.836671  306360 cri.go:89] found id: ""
	I0407 14:14:02.836705  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.836717  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:02.836725  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:02.836795  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:02.874110  306360 cri.go:89] found id: ""
	I0407 14:14:02.874145  306360 logs.go:282] 0 containers: []
	W0407 14:14:02.874153  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:02.874164  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:02.874174  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:02.927087  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:02.927125  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:02.941195  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:02.941225  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:03.011259  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:03.011290  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:03.011306  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:03.092321  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:03.092371  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:05.633412  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:05.646983  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:05.647066  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:05.686664  306360 cri.go:89] found id: ""
	I0407 14:14:05.686690  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.686698  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:05.686703  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:05.686756  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:05.726137  306360 cri.go:89] found id: ""
	I0407 14:14:05.726168  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.726179  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:05.726185  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:05.726250  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:05.766367  306360 cri.go:89] found id: ""
	I0407 14:14:05.766395  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.766404  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:05.766410  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:05.766467  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:05.804403  306360 cri.go:89] found id: ""
	I0407 14:14:05.804450  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.804463  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:05.804471  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:05.804530  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:05.844618  306360 cri.go:89] found id: ""
	I0407 14:14:05.844654  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.844664  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:05.844672  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:05.844747  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:05.877747  306360 cri.go:89] found id: ""
	I0407 14:14:05.877776  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.877786  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:05.877794  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:05.877865  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:05.914105  306360 cri.go:89] found id: ""
	I0407 14:14:05.914134  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.914144  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:05.914150  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:05.914204  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:05.951966  306360 cri.go:89] found id: ""
	I0407 14:14:05.952001  306360 logs.go:282] 0 containers: []
	W0407 14:14:05.952012  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:05.952025  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:05.952046  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:06.028131  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:06.028160  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:06.028178  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:06.107500  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:06.107540  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:06.150081  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:06.150120  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:06.200193  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:06.200234  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:08.716562  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:08.731308  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:08.731380  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:08.768923  306360 cri.go:89] found id: ""
	I0407 14:14:08.768959  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.768967  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:08.768973  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:08.769052  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:08.804155  306360 cri.go:89] found id: ""
	I0407 14:14:08.804185  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.804193  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:08.804199  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:08.804271  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:08.844015  306360 cri.go:89] found id: ""
	I0407 14:14:08.844042  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.844049  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:08.844055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:08.844115  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:08.886289  306360 cri.go:89] found id: ""
	I0407 14:14:08.886316  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.886324  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:08.886329  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:08.886379  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:08.923649  306360 cri.go:89] found id: ""
	I0407 14:14:08.923682  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.923691  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:08.923697  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:08.923753  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:08.962075  306360 cri.go:89] found id: ""
	I0407 14:14:08.962122  306360 logs.go:282] 0 containers: []
	W0407 14:14:08.962134  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:08.962144  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:08.962203  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:09.000369  306360 cri.go:89] found id: ""
	I0407 14:14:09.000405  306360 logs.go:282] 0 containers: []
	W0407 14:14:09.000418  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:09.000436  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:09.000504  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:09.035179  306360 cri.go:89] found id: ""
	I0407 14:14:09.035215  306360 logs.go:282] 0 containers: []
	W0407 14:14:09.035227  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:09.035240  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:09.035254  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:09.086091  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:09.086131  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:09.101155  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:09.101188  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:09.173502  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:09.173533  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:09.173549  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:09.251565  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:09.251606  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:11.819715  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:11.833606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:11.833667  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:11.874225  306360 cri.go:89] found id: ""
	I0407 14:14:11.874250  306360 logs.go:282] 0 containers: []
	W0407 14:14:11.874257  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:11.874263  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:11.874319  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:11.911819  306360 cri.go:89] found id: ""
	I0407 14:14:11.911846  306360 logs.go:282] 0 containers: []
	W0407 14:14:11.911857  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:11.911865  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:11.911929  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:11.955302  306360 cri.go:89] found id: ""
	I0407 14:14:11.955329  306360 logs.go:282] 0 containers: []
	W0407 14:14:11.955338  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:11.955352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:11.955429  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:11.993915  306360 cri.go:89] found id: ""
	I0407 14:14:11.993943  306360 logs.go:282] 0 containers: []
	W0407 14:14:11.993953  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:11.993961  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:11.994031  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:12.028198  306360 cri.go:89] found id: ""
	I0407 14:14:12.028226  306360 logs.go:282] 0 containers: []
	W0407 14:14:12.028234  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:12.028240  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:12.028301  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:12.067377  306360 cri.go:89] found id: ""
	I0407 14:14:12.067413  306360 logs.go:282] 0 containers: []
	W0407 14:14:12.067421  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:12.067428  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:12.067481  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:12.101946  306360 cri.go:89] found id: ""
	I0407 14:14:12.102005  306360 logs.go:282] 0 containers: []
	W0407 14:14:12.102018  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:12.102027  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:12.102100  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:12.144580  306360 cri.go:89] found id: ""
	I0407 14:14:12.144616  306360 logs.go:282] 0 containers: []
	W0407 14:14:12.144627  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:12.144641  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:12.144657  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:12.187667  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:12.187698  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:12.241562  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:12.241603  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:12.256175  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:12.256200  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:12.328062  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:12.328088  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:12.328101  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:14.909617  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:14.922263  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:14.922331  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:14.968834  306360 cri.go:89] found id: ""
	I0407 14:14:14.968875  306360 logs.go:282] 0 containers: []
	W0407 14:14:14.968887  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:14.968895  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:14.968969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:15.017927  306360 cri.go:89] found id: ""
	I0407 14:14:15.017958  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.017967  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:15.017973  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:15.018026  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:15.067205  306360 cri.go:89] found id: ""
	I0407 14:14:15.067241  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.067251  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:15.067259  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:15.067333  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:15.099771  306360 cri.go:89] found id: ""
	I0407 14:14:15.099800  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.099808  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:15.099814  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:15.099871  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:15.134473  306360 cri.go:89] found id: ""
	I0407 14:14:15.134503  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.134514  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:15.134522  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:15.134589  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:15.167608  306360 cri.go:89] found id: ""
	I0407 14:14:15.167646  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.167658  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:15.167666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:15.167721  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:15.201542  306360 cri.go:89] found id: ""
	I0407 14:14:15.201576  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.201584  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:15.201590  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:15.201652  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:15.239531  306360 cri.go:89] found id: ""
	I0407 14:14:15.239559  306360 logs.go:282] 0 containers: []
	W0407 14:14:15.239567  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:15.239577  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:15.239590  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:15.292082  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:15.292123  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:15.305758  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:15.305787  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:15.373710  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:15.373734  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:15.373748  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:15.458025  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:15.458093  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:17.998821  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:18.012182  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:18.012248  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:18.051168  306360 cri.go:89] found id: ""
	I0407 14:14:18.051200  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.051208  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:18.051215  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:18.051285  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:18.086201  306360 cri.go:89] found id: ""
	I0407 14:14:18.086228  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.086237  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:18.086242  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:18.086306  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:18.120977  306360 cri.go:89] found id: ""
	I0407 14:14:18.121003  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.121011  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:18.121016  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:18.121087  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:18.158015  306360 cri.go:89] found id: ""
	I0407 14:14:18.158055  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.158067  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:18.158075  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:18.158137  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:18.192374  306360 cri.go:89] found id: ""
	I0407 14:14:18.192411  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.192433  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:18.192441  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:18.192510  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:18.229802  306360 cri.go:89] found id: ""
	I0407 14:14:18.229844  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.229853  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:18.229859  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:18.229927  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:18.269492  306360 cri.go:89] found id: ""
	I0407 14:14:18.269517  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.269525  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:18.269531  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:18.269599  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:18.303940  306360 cri.go:89] found id: ""
	I0407 14:14:18.303973  306360 logs.go:282] 0 containers: []
	W0407 14:14:18.303982  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:18.303993  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:18.304004  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:18.354781  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:18.354817  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:18.371131  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:18.371174  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:18.450782  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:18.450817  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:18.450835  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:18.543243  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:18.543285  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:21.093214  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:21.110188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:21.110261  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:21.153416  306360 cri.go:89] found id: ""
	I0407 14:14:21.153453  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.153466  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:21.153474  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:21.153544  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:21.197035  306360 cri.go:89] found id: ""
	I0407 14:14:21.197065  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.197080  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:21.197087  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:21.197150  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:21.244606  306360 cri.go:89] found id: ""
	I0407 14:14:21.244640  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.244652  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:21.244660  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:21.244722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:21.280552  306360 cri.go:89] found id: ""
	I0407 14:14:21.280585  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.280598  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:21.280606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:21.280678  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:21.315310  306360 cri.go:89] found id: ""
	I0407 14:14:21.315351  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.315363  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:21.315376  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:21.315442  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:21.357244  306360 cri.go:89] found id: ""
	I0407 14:14:21.357278  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.357289  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:21.357297  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:21.357366  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:21.407626  306360 cri.go:89] found id: ""
	I0407 14:14:21.407659  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.407670  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:21.407678  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:21.407740  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:21.461052  306360 cri.go:89] found id: ""
	I0407 14:14:21.461088  306360 logs.go:282] 0 containers: []
	W0407 14:14:21.461100  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:21.461113  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:21.461128  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:21.527960  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:21.528013  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:21.546111  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:21.546149  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:21.654323  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:21.654351  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:21.654369  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:21.747620  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:21.747657  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:24.296170  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:24.312050  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:24.312117  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:24.354264  306360 cri.go:89] found id: ""
	I0407 14:14:24.354299  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.354312  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:24.354320  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:24.354399  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:24.413693  306360 cri.go:89] found id: ""
	I0407 14:14:24.413724  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.413735  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:24.413743  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:24.413802  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:24.450661  306360 cri.go:89] found id: ""
	I0407 14:14:24.450695  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.450708  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:24.450716  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:24.450785  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:24.485865  306360 cri.go:89] found id: ""
	I0407 14:14:24.485902  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.485914  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:24.485923  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:24.485994  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:24.529772  306360 cri.go:89] found id: ""
	I0407 14:14:24.529800  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.529811  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:24.529820  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:24.529891  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:24.568579  306360 cri.go:89] found id: ""
	I0407 14:14:24.568607  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.568616  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:24.568622  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:24.568690  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:24.608703  306360 cri.go:89] found id: ""
	I0407 14:14:24.608733  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.608740  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:24.608746  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:24.608808  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:24.645149  306360 cri.go:89] found id: ""
	I0407 14:14:24.645179  306360 logs.go:282] 0 containers: []
	W0407 14:14:24.645190  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:24.645203  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:24.645219  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:24.725262  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:24.725291  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:24.725309  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:24.804419  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:24.804471  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:24.849282  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:24.849322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:24.911239  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:24.911275  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:27.426526  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:27.441147  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:27.441223  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:27.485365  306360 cri.go:89] found id: ""
	I0407 14:14:27.485396  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.485405  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:27.485417  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:27.485479  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:27.525331  306360 cri.go:89] found id: ""
	I0407 14:14:27.525367  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.525379  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:27.525388  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:27.525455  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:27.564762  306360 cri.go:89] found id: ""
	I0407 14:14:27.564793  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.564804  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:27.564811  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:27.564890  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:27.605275  306360 cri.go:89] found id: ""
	I0407 14:14:27.605302  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.605309  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:27.605316  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:27.605368  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:27.641690  306360 cri.go:89] found id: ""
	I0407 14:14:27.641715  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.641723  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:27.641729  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:27.641792  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:27.685274  306360 cri.go:89] found id: ""
	I0407 14:14:27.685298  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.685306  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:27.685313  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:27.685375  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:27.732194  306360 cri.go:89] found id: ""
	I0407 14:14:27.732223  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.732242  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:27.732250  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:27.732317  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:27.768691  306360 cri.go:89] found id: ""
	I0407 14:14:27.768726  306360 logs.go:282] 0 containers: []
	W0407 14:14:27.768737  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:27.768751  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:27.768767  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:27.839082  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:27.839131  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:27.859056  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:27.859106  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:27.944902  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:27.944927  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:27.944939  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:28.031493  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:28.031532  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:30.578883  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:30.595647  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:30.595746  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:30.637796  306360 cri.go:89] found id: ""
	I0407 14:14:30.637832  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.637843  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:30.637852  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:30.637925  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:30.676229  306360 cri.go:89] found id: ""
	I0407 14:14:30.676269  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.676280  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:30.676289  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:30.676371  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:30.727472  306360 cri.go:89] found id: ""
	I0407 14:14:30.727512  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.727528  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:30.727537  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:30.727605  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:30.777789  306360 cri.go:89] found id: ""
	I0407 14:14:30.777828  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.777841  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:30.777850  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:30.777921  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:30.818219  306360 cri.go:89] found id: ""
	I0407 14:14:30.818251  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.818262  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:30.818270  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:30.818340  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:30.858911  306360 cri.go:89] found id: ""
	I0407 14:14:30.858950  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.858961  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:30.858969  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:30.859040  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:30.897811  306360 cri.go:89] found id: ""
	I0407 14:14:30.897851  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.897864  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:30.897872  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:30.897946  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:30.949270  306360 cri.go:89] found id: ""
	I0407 14:14:30.949306  306360 logs.go:282] 0 containers: []
	W0407 14:14:30.949319  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:30.949333  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:30.949350  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:31.004594  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:31.004638  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:31.024897  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:31.024937  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:31.128459  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:31.128485  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:31.128501  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:31.249249  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:31.249296  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:33.806237  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:33.825128  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:33.825218  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:33.865560  306360 cri.go:89] found id: ""
	I0407 14:14:33.865592  306360 logs.go:282] 0 containers: []
	W0407 14:14:33.865603  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:33.865611  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:33.865677  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:33.902189  306360 cri.go:89] found id: ""
	I0407 14:14:33.902231  306360 logs.go:282] 0 containers: []
	W0407 14:14:33.902246  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:33.902255  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:33.902346  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:33.937705  306360 cri.go:89] found id: ""
	I0407 14:14:33.937752  306360 logs.go:282] 0 containers: []
	W0407 14:14:33.937773  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:33.937781  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:33.937845  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:33.977204  306360 cri.go:89] found id: ""
	I0407 14:14:33.977234  306360 logs.go:282] 0 containers: []
	W0407 14:14:33.977246  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:33.977254  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:33.977313  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:34.017867  306360 cri.go:89] found id: ""
	I0407 14:14:34.017898  306360 logs.go:282] 0 containers: []
	W0407 14:14:34.017909  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:34.017917  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:34.017982  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:34.053135  306360 cri.go:89] found id: ""
	I0407 14:14:34.053168  306360 logs.go:282] 0 containers: []
	W0407 14:14:34.053180  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:34.053188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:34.053253  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:34.089620  306360 cri.go:89] found id: ""
	I0407 14:14:34.089648  306360 logs.go:282] 0 containers: []
	W0407 14:14:34.089659  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:34.089667  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:34.089726  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:34.129319  306360 cri.go:89] found id: ""
	I0407 14:14:34.129354  306360 logs.go:282] 0 containers: []
	W0407 14:14:34.129366  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:34.129388  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:34.129426  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:34.220260  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:34.220346  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:34.220370  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:34.301776  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:34.301821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:34.342798  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:34.342841  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:34.396131  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:34.396171  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:36.914756  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:36.930893  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:36.930984  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:36.971884  306360 cri.go:89] found id: ""
	I0407 14:14:36.971911  306360 logs.go:282] 0 containers: []
	W0407 14:14:36.971924  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:36.971930  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:36.971994  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:37.018084  306360 cri.go:89] found id: ""
	I0407 14:14:37.018128  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.018138  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:37.018145  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:37.018213  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:37.065639  306360 cri.go:89] found id: ""
	I0407 14:14:37.065671  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.065682  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:37.065689  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:37.065761  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:37.121609  306360 cri.go:89] found id: ""
	I0407 14:14:37.121642  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.121652  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:37.121660  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:37.121760  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:37.167016  306360 cri.go:89] found id: ""
	I0407 14:14:37.167077  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.167091  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:37.167100  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:37.167208  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:37.209018  306360 cri.go:89] found id: ""
	I0407 14:14:37.209119  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.209147  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:37.209164  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:37.209257  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:37.251180  306360 cri.go:89] found id: ""
	I0407 14:14:37.251213  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.251224  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:37.251232  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:37.251296  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:37.289151  306360 cri.go:89] found id: ""
	I0407 14:14:37.289177  306360 logs.go:282] 0 containers: []
	W0407 14:14:37.289186  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:37.289196  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:37.289207  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:37.363809  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:37.363869  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:37.382032  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:37.382068  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:37.464538  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:37.464577  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:37.464596  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:37.556882  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:37.556931  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:40.139026  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:40.154827  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:40.154910  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:40.195054  306360 cri.go:89] found id: ""
	I0407 14:14:40.195078  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.195088  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:40.195096  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:40.195152  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:40.236563  306360 cri.go:89] found id: ""
	I0407 14:14:40.236593  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.236603  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:40.236609  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:40.236674  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:40.285263  306360 cri.go:89] found id: ""
	I0407 14:14:40.285302  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.285318  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:40.285327  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:40.285400  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:40.326201  306360 cri.go:89] found id: ""
	I0407 14:14:40.326238  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.326250  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:40.326259  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:40.326326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:40.366270  306360 cri.go:89] found id: ""
	I0407 14:14:40.366298  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.366306  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:40.366312  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:40.366375  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:40.406701  306360 cri.go:89] found id: ""
	I0407 14:14:40.406731  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.406741  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:40.406753  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:40.406822  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:40.445855  306360 cri.go:89] found id: ""
	I0407 14:14:40.445891  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.445903  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:40.445911  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:40.445982  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:40.490969  306360 cri.go:89] found id: ""
	I0407 14:14:40.491009  306360 logs.go:282] 0 containers: []
	W0407 14:14:40.491021  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:40.491034  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:40.491049  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:40.506017  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:40.506051  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:40.588717  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:40.588744  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:40.588762  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:40.673803  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:40.673858  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:40.723457  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:40.723496  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:43.289067  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:43.307941  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:43.308038  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:43.359130  306360 cri.go:89] found id: ""
	I0407 14:14:43.359164  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.359177  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:43.359186  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:43.359260  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:43.395765  306360 cri.go:89] found id: ""
	I0407 14:14:43.395802  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.395813  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:43.395829  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:43.395900  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:43.439375  306360 cri.go:89] found id: ""
	I0407 14:14:43.439409  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.439419  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:43.439426  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:43.439495  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:43.473580  306360 cri.go:89] found id: ""
	I0407 14:14:43.473610  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.473621  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:43.473628  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:43.473694  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:43.523522  306360 cri.go:89] found id: ""
	I0407 14:14:43.523556  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.523565  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:43.523572  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:43.523648  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:43.565535  306360 cri.go:89] found id: ""
	I0407 14:14:43.565567  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.565577  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:43.565585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:43.565650  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:43.624245  306360 cri.go:89] found id: ""
	I0407 14:14:43.624277  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.624288  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:43.624295  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:43.624365  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:43.661144  306360 cri.go:89] found id: ""
	I0407 14:14:43.661183  306360 logs.go:282] 0 containers: []
	W0407 14:14:43.661195  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:43.661209  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:43.661225  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:43.744610  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:43.744656  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:43.789207  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:43.789236  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:43.840658  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:43.840695  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:43.855497  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:43.855528  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:43.934869  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:46.435125  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:46.449918  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:46.450014  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:46.486189  306360 cri.go:89] found id: ""
	I0407 14:14:46.486225  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.486236  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:46.486244  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:46.486299  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:46.519794  306360 cri.go:89] found id: ""
	I0407 14:14:46.519832  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.519843  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:46.519852  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:46.519923  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:46.555096  306360 cri.go:89] found id: ""
	I0407 14:14:46.555127  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.555138  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:46.555146  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:46.555209  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:46.591984  306360 cri.go:89] found id: ""
	I0407 14:14:46.592010  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.592018  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:46.592024  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:46.592083  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:46.625093  306360 cri.go:89] found id: ""
	I0407 14:14:46.625126  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.625149  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:46.625166  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:46.625242  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:46.660512  306360 cri.go:89] found id: ""
	I0407 14:14:46.660541  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.660549  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:46.660555  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:46.660607  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:46.697730  306360 cri.go:89] found id: ""
	I0407 14:14:46.697767  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.697779  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:46.697788  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:46.697879  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:46.738690  306360 cri.go:89] found id: ""
	I0407 14:14:46.738717  306360 logs.go:282] 0 containers: []
	W0407 14:14:46.738725  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:46.738735  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:46.738745  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:46.751764  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:46.751794  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:46.822530  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:46.822561  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:46.822577  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:46.903380  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:46.903418  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:46.943722  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:46.943755  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:49.499921  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:49.514736  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:49.514843  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:49.551525  306360 cri.go:89] found id: ""
	I0407 14:14:49.551551  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.551561  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:49.551569  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:49.551634  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:49.594712  306360 cri.go:89] found id: ""
	I0407 14:14:49.594749  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.594760  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:49.594767  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:49.594834  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:49.636139  306360 cri.go:89] found id: ""
	I0407 14:14:49.636170  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.636181  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:49.636189  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:49.636255  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:49.673255  306360 cri.go:89] found id: ""
	I0407 14:14:49.673284  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.673295  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:49.673306  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:49.673369  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:49.715383  306360 cri.go:89] found id: ""
	I0407 14:14:49.715411  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.715419  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:49.715425  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:49.715478  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:49.751619  306360 cri.go:89] found id: ""
	I0407 14:14:49.751644  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.751651  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:49.751657  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:49.751710  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:49.788735  306360 cri.go:89] found id: ""
	I0407 14:14:49.788776  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.788788  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:49.788803  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:49.788877  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:49.831617  306360 cri.go:89] found id: ""
	I0407 14:14:49.831645  306360 logs.go:282] 0 containers: []
	W0407 14:14:49.831656  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:49.831668  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:49.831683  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:49.902457  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:49.902509  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:49.918767  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:49.918800  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:50.009571  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:50.009600  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:50.009619  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:50.117941  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:50.117994  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:52.659296  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:52.679112  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:52.679206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:52.726492  306360 cri.go:89] found id: ""
	I0407 14:14:52.726526  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.726538  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:52.726546  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:52.726613  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:52.767520  306360 cri.go:89] found id: ""
	I0407 14:14:52.767557  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.767570  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:52.767579  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:52.767648  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:52.810852  306360 cri.go:89] found id: ""
	I0407 14:14:52.810888  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.810899  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:52.810916  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:52.810991  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:52.857157  306360 cri.go:89] found id: ""
	I0407 14:14:52.857199  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.857211  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:52.857219  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:52.857295  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:52.898541  306360 cri.go:89] found id: ""
	I0407 14:14:52.898574  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.898584  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:52.898592  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:52.898663  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:52.948071  306360 cri.go:89] found id: ""
	I0407 14:14:52.948106  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.948118  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:52.948126  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:52.948222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:52.997910  306360 cri.go:89] found id: ""
	I0407 14:14:52.997944  306360 logs.go:282] 0 containers: []
	W0407 14:14:52.997958  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:52.997970  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:52.998042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:53.045644  306360 cri.go:89] found id: ""
	I0407 14:14:53.045683  306360 logs.go:282] 0 containers: []
	W0407 14:14:53.045695  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:53.045708  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:53.045724  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:53.149848  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:53.149872  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:53.149885  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:53.248843  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:53.248882  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:53.302690  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:53.302725  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:53.364496  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:53.364540  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:55.885479  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:55.903236  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:55.903325  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:55.946311  306360 cri.go:89] found id: ""
	I0407 14:14:55.946345  306360 logs.go:282] 0 containers: []
	W0407 14:14:55.946357  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:55.946364  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:55.946424  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:56.003708  306360 cri.go:89] found id: ""
	I0407 14:14:56.003745  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.003759  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:56.003768  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:56.003879  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:56.053989  306360 cri.go:89] found id: ""
	I0407 14:14:56.054024  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.054036  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:56.054044  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:56.054117  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:56.094767  306360 cri.go:89] found id: ""
	I0407 14:14:56.094796  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.094807  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:56.094815  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:56.094873  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:56.131924  306360 cri.go:89] found id: ""
	I0407 14:14:56.131957  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.131968  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:56.131976  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:56.132037  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:56.173395  306360 cri.go:89] found id: ""
	I0407 14:14:56.173422  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.173432  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:56.173438  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:56.173490  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:56.236302  306360 cri.go:89] found id: ""
	I0407 14:14:56.236335  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.236346  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:56.236353  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:56.236409  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:56.272608  306360 cri.go:89] found id: ""
	I0407 14:14:56.272639  306360 logs.go:282] 0 containers: []
	W0407 14:14:56.272647  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:56.272657  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:56.272671  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:56.339641  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:56.339674  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:56.357145  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:56.357173  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:56.431859  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:56.431887  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:56.431906  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:56.525700  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:56.525756  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:14:59.076629  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:14:59.093198  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:14:59.093297  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:14:59.132353  306360 cri.go:89] found id: ""
	I0407 14:14:59.132386  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.132396  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:14:59.132402  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:14:59.132505  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:14:59.179766  306360 cri.go:89] found id: ""
	I0407 14:14:59.179795  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.179806  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:14:59.179814  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:14:59.179885  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:14:59.222711  306360 cri.go:89] found id: ""
	I0407 14:14:59.222745  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.222757  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:14:59.222766  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:14:59.222850  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:14:59.267751  306360 cri.go:89] found id: ""
	I0407 14:14:59.267786  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.267798  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:14:59.267808  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:14:59.267879  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:14:59.302328  306360 cri.go:89] found id: ""
	I0407 14:14:59.302366  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.302378  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:14:59.302386  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:14:59.302455  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:14:59.338509  306360 cri.go:89] found id: ""
	I0407 14:14:59.338537  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.338548  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:14:59.338556  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:14:59.338617  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:14:59.373161  306360 cri.go:89] found id: ""
	I0407 14:14:59.373191  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.373202  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:14:59.373209  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:14:59.373269  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:14:59.414543  306360 cri.go:89] found id: ""
	I0407 14:14:59.414577  306360 logs.go:282] 0 containers: []
	W0407 14:14:59.414641  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:14:59.414656  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:14:59.414677  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:14:59.490051  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:14:59.490098  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:14:59.504861  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:14:59.504893  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:14:59.576987  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:14:59.577019  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:14:59.577037  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:14:59.657841  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:14:59.657881  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:02.199729  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:02.213205  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:02.213281  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:02.260658  306360 cri.go:89] found id: ""
	I0407 14:15:02.260693  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.260705  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:02.260714  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:02.260787  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:02.306065  306360 cri.go:89] found id: ""
	I0407 14:15:02.306099  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.306110  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:02.306117  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:02.306196  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:02.345536  306360 cri.go:89] found id: ""
	I0407 14:15:02.345574  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.345585  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:02.345597  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:02.345666  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:02.395070  306360 cri.go:89] found id: ""
	I0407 14:15:02.395092  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.395100  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:02.395108  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:02.395168  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:02.430722  306360 cri.go:89] found id: ""
	I0407 14:15:02.430749  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.430757  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:02.430765  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:02.430834  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:02.477577  306360 cri.go:89] found id: ""
	I0407 14:15:02.477614  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.477626  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:02.477634  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:02.477703  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:02.519164  306360 cri.go:89] found id: ""
	I0407 14:15:02.519192  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.519203  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:02.519210  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:02.519268  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:02.566407  306360 cri.go:89] found id: ""
	I0407 14:15:02.566447  306360 logs.go:282] 0 containers: []
	W0407 14:15:02.566458  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:02.566471  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:02.566496  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:02.641527  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:02.641583  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:02.660182  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:02.660231  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:02.750261  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:02.750286  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:02.750304  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:02.841129  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:02.841177  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:05.384543  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:05.399330  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:05.399401  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:05.433436  306360 cri.go:89] found id: ""
	I0407 14:15:05.433464  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.433472  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:05.433482  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:05.433536  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:05.479042  306360 cri.go:89] found id: ""
	I0407 14:15:05.479086  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.479098  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:05.479107  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:05.479181  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:05.521447  306360 cri.go:89] found id: ""
	I0407 14:15:05.521482  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.521493  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:05.521502  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:05.521580  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:05.555861  306360 cri.go:89] found id: ""
	I0407 14:15:05.555898  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.555911  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:05.555920  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:05.555988  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:05.592928  306360 cri.go:89] found id: ""
	I0407 14:15:05.592949  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.592963  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:05.592968  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:05.593028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:05.630061  306360 cri.go:89] found id: ""
	I0407 14:15:05.630090  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.630099  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:05.630106  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:05.630162  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:05.664277  306360 cri.go:89] found id: ""
	I0407 14:15:05.664308  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.664319  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:05.664326  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:05.664388  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:05.703342  306360 cri.go:89] found id: ""
	I0407 14:15:05.703370  306360 logs.go:282] 0 containers: []
	W0407 14:15:05.703380  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:05.703393  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:05.703407  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:05.755925  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:05.755961  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:05.769700  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:05.769727  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:05.845251  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:05.845279  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:05.845297  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:05.923398  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:05.923440  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:08.472845  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:08.485286  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:08.485355  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:08.521213  306360 cri.go:89] found id: ""
	I0407 14:15:08.521246  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.521257  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:08.521266  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:08.521336  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:08.559732  306360 cri.go:89] found id: ""
	I0407 14:15:08.559765  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.559774  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:08.559781  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:08.559844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:08.599587  306360 cri.go:89] found id: ""
	I0407 14:15:08.599621  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.599632  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:08.599640  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:08.599714  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:08.639437  306360 cri.go:89] found id: ""
	I0407 14:15:08.639467  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.639475  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:08.639483  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:08.639540  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:08.681533  306360 cri.go:89] found id: ""
	I0407 14:15:08.681562  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.681570  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:08.681580  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:08.681640  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:08.729725  306360 cri.go:89] found id: ""
	I0407 14:15:08.729759  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.729769  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:08.729776  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:08.729844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:08.765881  306360 cri.go:89] found id: ""
	I0407 14:15:08.765909  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.765917  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:08.765923  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:08.765975  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:08.800692  306360 cri.go:89] found id: ""
	I0407 14:15:08.800726  306360 logs.go:282] 0 containers: []
	W0407 14:15:08.800737  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:08.800750  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:08.800768  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:08.849736  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:08.849774  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:08.867514  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:08.867556  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:08.937406  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:08.937431  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:08.937445  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:09.012784  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:09.012828  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:11.553621  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:11.566713  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:11.566788  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:11.601456  306360 cri.go:89] found id: ""
	I0407 14:15:11.601485  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.601493  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:11.601499  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:11.601550  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:11.636979  306360 cri.go:89] found id: ""
	I0407 14:15:11.637009  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.637021  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:11.637029  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:11.637090  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:11.678922  306360 cri.go:89] found id: ""
	I0407 14:15:11.678951  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.678962  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:11.678971  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:11.679029  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:11.715952  306360 cri.go:89] found id: ""
	I0407 14:15:11.715980  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.715989  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:11.715995  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:11.716047  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:11.758289  306360 cri.go:89] found id: ""
	I0407 14:15:11.758317  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.758327  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:11.758351  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:11.758425  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:11.808236  306360 cri.go:89] found id: ""
	I0407 14:15:11.808256  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.808262  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:11.808271  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:11.808316  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:11.856912  306360 cri.go:89] found id: ""
	I0407 14:15:11.856952  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.856965  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:11.856973  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:11.857039  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:11.894832  306360 cri.go:89] found id: ""
	I0407 14:15:11.894868  306360 logs.go:282] 0 containers: []
	W0407 14:15:11.894881  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:11.894895  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:11.894913  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:11.977656  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:11.977679  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:11.977692  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:12.057546  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:12.057589  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:12.100982  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:12.101021  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:12.170287  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:12.170347  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:14.689350  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:14.703003  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:14.703082  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:14.735944  306360 cri.go:89] found id: ""
	I0407 14:15:14.735969  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.735980  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:14.735987  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:14.736044  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:14.769748  306360 cri.go:89] found id: ""
	I0407 14:15:14.769777  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.769785  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:14.769791  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:14.769842  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:14.801872  306360 cri.go:89] found id: ""
	I0407 14:15:14.801904  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.801912  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:14.801918  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:14.801994  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:14.838169  306360 cri.go:89] found id: ""
	I0407 14:15:14.838203  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.838211  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:14.838218  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:14.838273  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:14.870347  306360 cri.go:89] found id: ""
	I0407 14:15:14.870378  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.870385  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:14.870392  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:14.870453  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:14.904035  306360 cri.go:89] found id: ""
	I0407 14:15:14.904070  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.904078  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:14.904085  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:14.904153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:14.941661  306360 cri.go:89] found id: ""
	I0407 14:15:14.941691  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.941699  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:14.941706  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:14.941760  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:14.980456  306360 cri.go:89] found id: ""
	I0407 14:15:14.980485  306360 logs.go:282] 0 containers: []
	W0407 14:15:14.980493  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:14.980503  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:14.980513  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:15.059109  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:15.059153  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:15.097787  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:15.097818  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:15.145964  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:15.146007  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:15.160601  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:15.160638  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:15.228188  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:17.728590  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:17.744769  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:17.744840  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:17.781094  306360 cri.go:89] found id: ""
	I0407 14:15:17.781124  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.781136  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:17.781144  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:17.781206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:17.825315  306360 cri.go:89] found id: ""
	I0407 14:15:17.825340  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.825348  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:17.825353  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:17.825400  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:17.867095  306360 cri.go:89] found id: ""
	I0407 14:15:17.867122  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.867132  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:17.867142  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:17.867199  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:17.905175  306360 cri.go:89] found id: ""
	I0407 14:15:17.905204  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.905303  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:17.905318  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:17.905380  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:17.938724  306360 cri.go:89] found id: ""
	I0407 14:15:17.938759  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.938770  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:17.938779  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:17.938846  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:17.973854  306360 cri.go:89] found id: ""
	I0407 14:15:17.973887  306360 logs.go:282] 0 containers: []
	W0407 14:15:17.973898  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:17.973906  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:17.973974  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:18.011170  306360 cri.go:89] found id: ""
	I0407 14:15:18.011197  306360 logs.go:282] 0 containers: []
	W0407 14:15:18.011207  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:18.011215  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:18.011273  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:18.047847  306360 cri.go:89] found id: ""
	I0407 14:15:18.047870  306360 logs.go:282] 0 containers: []
	W0407 14:15:18.047882  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:18.047892  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:18.047906  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:18.088872  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:18.088894  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:18.139471  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:18.139504  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:18.153657  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:18.153682  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:18.223860  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:18.223885  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:18.223901  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:20.812557  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:20.825170  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:20.825233  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:20.860155  306360 cri.go:89] found id: ""
	I0407 14:15:20.860188  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.860199  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:20.860210  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:20.860270  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:20.896637  306360 cri.go:89] found id: ""
	I0407 14:15:20.896666  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.896673  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:20.896679  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:20.896737  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:20.937796  306360 cri.go:89] found id: ""
	I0407 14:15:20.937828  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.937837  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:20.937843  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:20.937896  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:20.983104  306360 cri.go:89] found id: ""
	I0407 14:15:20.983138  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.983149  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:20.983157  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:20.983222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:21.024555  306360 cri.go:89] found id: ""
	I0407 14:15:21.024591  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.024602  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:21.024609  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:21.024685  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:21.068400  306360 cri.go:89] found id: ""
	I0407 14:15:21.068484  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.068495  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:21.068502  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:21.068572  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:21.107962  306360 cri.go:89] found id: ""
	I0407 14:15:21.107990  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.107998  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:21.108004  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:21.108067  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:21.147955  306360 cri.go:89] found id: ""
	I0407 14:15:21.147981  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.147989  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:21.147999  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:21.148010  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:21.164790  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:21.164818  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:21.236045  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:21.236068  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:21.236081  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:21.313784  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:21.313821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:21.357183  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:21.357215  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:23.907736  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:23.921413  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:23.921481  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:23.959486  306360 cri.go:89] found id: ""
	I0407 14:15:23.959513  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.959520  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:23.959526  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:23.959585  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:23.992912  306360 cri.go:89] found id: ""
	I0407 14:15:23.992938  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.992946  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:23.992952  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:23.993010  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:24.024279  306360 cri.go:89] found id: ""
	I0407 14:15:24.024308  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.024316  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:24.024323  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:24.024376  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:24.062320  306360 cri.go:89] found id: ""
	I0407 14:15:24.062353  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.062362  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:24.062371  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:24.062432  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:24.122748  306360 cri.go:89] found id: ""
	I0407 14:15:24.122774  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.122782  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:24.122787  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:24.122857  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:24.156773  306360 cri.go:89] found id: ""
	I0407 14:15:24.156803  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.156814  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:24.156831  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:24.156899  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:24.192903  306360 cri.go:89] found id: ""
	I0407 14:15:24.192940  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.192952  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:24.192960  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:24.193017  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:24.228041  306360 cri.go:89] found id: ""
	I0407 14:15:24.228081  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.228093  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:24.228105  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:24.228122  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:24.276177  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:24.276212  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:24.289668  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:24.289701  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:24.356935  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:24.356962  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:24.356981  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:24.442103  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:24.442140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:26.983553  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:26.996033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:26.996104  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:27.029665  306360 cri.go:89] found id: ""
	I0407 14:15:27.029692  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.029700  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:27.029705  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:27.029756  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:27.069962  306360 cri.go:89] found id: ""
	I0407 14:15:27.069992  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.070000  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:27.070009  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:27.070074  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:27.112142  306360 cri.go:89] found id: ""
	I0407 14:15:27.112174  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.112182  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:27.112188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:27.112240  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:27.152647  306360 cri.go:89] found id: ""
	I0407 14:15:27.152675  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.152685  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:27.152691  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:27.152743  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:27.188973  306360 cri.go:89] found id: ""
	I0407 14:15:27.189004  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.189015  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:27.189023  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:27.189099  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:27.228054  306360 cri.go:89] found id: ""
	I0407 14:15:27.228085  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.228095  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:27.228102  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:27.228164  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:27.262089  306360 cri.go:89] found id: ""
	I0407 14:15:27.262121  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.262131  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:27.262152  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:27.262222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:27.298902  306360 cri.go:89] found id: ""
	I0407 14:15:27.298939  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.298951  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:27.298969  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:27.298988  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:27.338649  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:27.338676  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:27.388606  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:27.388653  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:27.403449  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:27.403491  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:27.469414  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:27.469448  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:27.469467  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.052698  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:30.071454  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:30.071529  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:30.104690  306360 cri.go:89] found id: ""
	I0407 14:15:30.104723  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.104733  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:30.104741  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:30.104805  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:30.139611  306360 cri.go:89] found id: ""
	I0407 14:15:30.139641  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.139651  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:30.139658  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:30.139724  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:30.173648  306360 cri.go:89] found id: ""
	I0407 14:15:30.173679  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.173691  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:30.173702  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:30.173766  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:30.207015  306360 cri.go:89] found id: ""
	I0407 14:15:30.207045  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.207055  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:30.207062  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:30.207141  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:30.242602  306360 cri.go:89] found id: ""
	I0407 14:15:30.242631  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.242642  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:30.242647  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:30.242698  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:30.275775  306360 cri.go:89] found id: ""
	I0407 14:15:30.275811  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.275824  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:30.275834  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:30.275906  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:30.310674  306360 cri.go:89] found id: ""
	I0407 14:15:30.310710  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.310722  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:30.310734  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:30.310803  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:30.342628  306360 cri.go:89] found id: ""
	I0407 14:15:30.342666  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.342677  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:30.342690  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:30.342704  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:30.390588  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:30.390625  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:30.405143  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:30.405179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:30.473557  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:30.473590  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:30.473607  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.555915  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:30.555961  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:33.094714  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:33.107818  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:33.107883  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:33.147279  306360 cri.go:89] found id: ""
	I0407 14:15:33.147310  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.147317  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:33.147323  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:33.147374  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:33.182866  306360 cri.go:89] found id: ""
	I0407 14:15:33.182895  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.182903  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:33.182909  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:33.182962  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:33.219845  306360 cri.go:89] found id: ""
	I0407 14:15:33.219881  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.219894  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:33.219903  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:33.219980  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:33.255785  306360 cri.go:89] found id: ""
	I0407 14:15:33.255818  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.255832  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:33.255838  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:33.255888  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:33.296287  306360 cri.go:89] found id: ""
	I0407 14:15:33.296320  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.296331  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:33.296339  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:33.296406  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:33.333123  306360 cri.go:89] found id: ""
	I0407 14:15:33.333156  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.333167  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:33.333174  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:33.333244  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:33.367813  306360 cri.go:89] found id: ""
	I0407 14:15:33.367844  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.367855  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:33.367862  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:33.367930  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:33.401927  306360 cri.go:89] found id: ""
	I0407 14:15:33.401957  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.401964  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:33.401974  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:33.401985  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:33.464350  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:33.464390  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:33.478831  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:33.478866  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:33.554322  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:33.554352  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:33.554370  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:33.632339  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:33.632381  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:36.177635  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:36.191117  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:36.191215  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:36.229342  306360 cri.go:89] found id: ""
	I0407 14:15:36.229373  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.229384  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:36.229391  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:36.229461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:36.269119  306360 cri.go:89] found id: ""
	I0407 14:15:36.269151  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.269162  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:36.269170  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:36.269236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:36.312510  306360 cri.go:89] found id: ""
	I0407 14:15:36.312544  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.312556  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:36.312563  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:36.312632  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:36.346706  306360 cri.go:89] found id: ""
	I0407 14:15:36.346741  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.346753  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:36.346762  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:36.346830  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:36.382862  306360 cri.go:89] found id: ""
	I0407 14:15:36.382899  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.382912  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:36.382920  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:36.382989  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:36.424287  306360 cri.go:89] found id: ""
	I0407 14:15:36.424318  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.424329  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:36.424337  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:36.424407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:36.473843  306360 cri.go:89] found id: ""
	I0407 14:15:36.473891  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.473906  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:36.473916  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:36.474002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:36.532647  306360 cri.go:89] found id: ""
	I0407 14:15:36.532685  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.532697  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:36.532711  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:36.532727  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:36.599779  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:36.599820  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:36.614047  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:36.614082  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:36.692006  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:36.692030  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:36.692044  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:36.782142  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:36.782196  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.320544  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:39.333558  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:39.333630  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:39.367209  306360 cri.go:89] found id: ""
	I0407 14:15:39.367244  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.367255  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:39.367264  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:39.367338  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:39.406298  306360 cri.go:89] found id: ""
	I0407 14:15:39.406326  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.406335  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:39.406342  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:39.406407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:39.440090  306360 cri.go:89] found id: ""
	I0407 14:15:39.440118  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.440128  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:39.440134  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:39.440197  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:39.473483  306360 cri.go:89] found id: ""
	I0407 14:15:39.473514  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.473527  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:39.473534  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:39.473602  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:39.505571  306360 cri.go:89] found id: ""
	I0407 14:15:39.505599  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.505607  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:39.505613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:39.505676  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:39.538929  306360 cri.go:89] found id: ""
	I0407 14:15:39.538961  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.538971  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:39.538980  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:39.539045  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:39.572047  306360 cri.go:89] found id: ""
	I0407 14:15:39.572078  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.572089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:39.572097  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:39.572163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:39.605781  306360 cri.go:89] found id: ""
	I0407 14:15:39.605812  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.605854  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:39.605868  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:39.605885  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:39.684887  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:39.684931  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.725609  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:39.725639  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:39.776592  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:39.776634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:39.792687  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:39.792719  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:39.859832  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:42.361106  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:42.374378  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:42.374461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:42.409267  306360 cri.go:89] found id: ""
	I0407 14:15:42.409296  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.409304  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:42.409309  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:42.409361  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:42.442512  306360 cri.go:89] found id: ""
	I0407 14:15:42.442540  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.442548  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:42.442554  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:42.442603  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:42.476016  306360 cri.go:89] found id: ""
	I0407 14:15:42.476044  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.476055  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:42.476063  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:42.476127  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:42.507103  306360 cri.go:89] found id: ""
	I0407 14:15:42.507138  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.507145  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:42.507151  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:42.507205  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:42.543140  306360 cri.go:89] found id: ""
	I0407 14:15:42.543167  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.543178  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:42.543185  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:42.543260  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:42.583718  306360 cri.go:89] found id: ""
	I0407 14:15:42.583749  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.583756  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:42.583764  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:42.583826  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:42.617614  306360 cri.go:89] found id: ""
	I0407 14:15:42.617649  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.617660  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:42.617668  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:42.617736  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:42.652193  306360 cri.go:89] found id: ""
	I0407 14:15:42.652220  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.652227  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:42.652237  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:42.652250  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:42.700778  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:42.700817  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:42.713926  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:42.713958  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:42.781552  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:42.781577  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:42.781590  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:42.857460  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:42.857502  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:45.397689  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:45.416022  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:45.416089  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:45.457038  306360 cri.go:89] found id: ""
	I0407 14:15:45.457078  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.457089  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:45.457097  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:45.457168  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:45.491527  306360 cri.go:89] found id: ""
	I0407 14:15:45.491559  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.491570  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:45.491578  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:45.491647  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:45.524296  306360 cri.go:89] found id: ""
	I0407 14:15:45.524333  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.524344  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:45.524352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:45.524416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:45.562418  306360 cri.go:89] found id: ""
	I0407 14:15:45.562450  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.562461  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:45.562469  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:45.562537  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:45.601384  306360 cri.go:89] found id: ""
	I0407 14:15:45.601409  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.601417  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:45.601423  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:45.601471  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:45.638899  306360 cri.go:89] found id: ""
	I0407 14:15:45.638924  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.638933  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:45.638939  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:45.639005  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:45.675994  306360 cri.go:89] found id: ""
	I0407 14:15:45.676031  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.676047  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:45.676064  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:45.676128  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:45.714599  306360 cri.go:89] found id: ""
	I0407 14:15:45.714626  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.714637  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:45.714648  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:45.714665  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:45.780477  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:45.780527  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:45.794822  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:45.794859  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:45.866895  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:45.866921  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:45.866944  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:45.951585  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:45.951615  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:48.488815  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:48.507944  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:48.508026  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:48.551257  306360 cri.go:89] found id: ""
	I0407 14:15:48.551300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.551314  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:48.551324  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:48.551402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:48.595600  306360 cri.go:89] found id: ""
	I0407 14:15:48.595626  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.595634  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:48.595640  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:48.595704  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:48.639221  306360 cri.go:89] found id: ""
	I0407 14:15:48.639248  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.639255  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:48.639261  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:48.639326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:48.680520  306360 cri.go:89] found id: ""
	I0407 14:15:48.680562  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.680575  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:48.680585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:48.680679  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:48.728260  306360 cri.go:89] found id: ""
	I0407 14:15:48.728300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.728315  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:48.728326  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:48.728410  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:48.773839  306360 cri.go:89] found id: ""
	I0407 14:15:48.773875  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.773886  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:48.773893  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:48.773955  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:48.814915  306360 cri.go:89] found id: ""
	I0407 14:15:48.814947  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.814957  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:48.814963  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:48.815028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:48.860191  306360 cri.go:89] found id: ""
	I0407 14:15:48.860225  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.860245  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:48.860258  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:48.860273  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:48.922676  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:48.922714  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:48.939569  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:48.939618  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:49.016199  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:49.016225  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:49.016248  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:49.097968  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:49.098013  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:51.641164  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:51.655473  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:51.655548  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:51.690008  306360 cri.go:89] found id: ""
	I0407 14:15:51.690036  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.690047  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:51.690055  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:51.690118  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:51.728115  306360 cri.go:89] found id: ""
	I0407 14:15:51.728141  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.728150  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:51.728157  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:51.728222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:51.764117  306360 cri.go:89] found id: ""
	I0407 14:15:51.764156  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.764168  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:51.764180  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:51.764243  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:51.801243  306360 cri.go:89] found id: ""
	I0407 14:15:51.801279  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.801291  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:51.801299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:51.801363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:51.838262  306360 cri.go:89] found id: ""
	I0407 14:15:51.838292  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.838302  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:51.838310  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:51.838378  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:51.880251  306360 cri.go:89] found id: ""
	I0407 14:15:51.880284  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.880294  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:51.880302  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:51.880373  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:51.922175  306360 cri.go:89] found id: ""
	I0407 14:15:51.922203  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.922213  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:51.922220  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:51.922291  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:51.963932  306360 cri.go:89] found id: ""
	I0407 14:15:51.963960  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.963970  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:51.963985  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:51.964000  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:52.046274  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:52.046322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:52.093979  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:52.094019  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:52.148613  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:52.148660  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:52.162525  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:52.162559  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:52.239788  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:54.740063  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:54.757191  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:54.757267  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:54.789524  306360 cri.go:89] found id: ""
	I0407 14:15:54.789564  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.789575  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:54.789584  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:54.789646  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:54.823746  306360 cri.go:89] found id: ""
	I0407 14:15:54.823785  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.823797  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:54.823805  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:54.823875  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:54.861371  306360 cri.go:89] found id: ""
	I0407 14:15:54.861406  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.861417  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:54.861424  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:54.861486  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:54.896286  306360 cri.go:89] found id: ""
	I0407 14:15:54.896318  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.896327  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:54.896334  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:54.896402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:54.938594  306360 cri.go:89] found id: ""
	I0407 14:15:54.938632  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.938643  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:54.938651  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:54.938722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:54.971701  306360 cri.go:89] found id: ""
	I0407 14:15:54.971737  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.971745  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:54.971751  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:54.971809  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:55.008651  306360 cri.go:89] found id: ""
	I0407 14:15:55.008682  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.008693  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:55.008700  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:55.008768  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:55.043829  306360 cri.go:89] found id: ""
	I0407 14:15:55.043860  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.043868  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:55.043879  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:55.043899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:55.094682  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:55.094720  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:55.109798  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:55.109855  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:55.187514  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:55.187540  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:55.187555  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:55.273313  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:55.273360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:57.811712  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:57.825529  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:57.825597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:57.863098  306360 cri.go:89] found id: ""
	I0407 14:15:57.863139  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.863152  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:57.863160  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:57.863231  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:57.902011  306360 cri.go:89] found id: ""
	I0407 14:15:57.902049  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.902059  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:57.902067  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:57.902134  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:57.965448  306360 cri.go:89] found id: ""
	I0407 14:15:57.965475  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.965485  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:57.965492  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:57.965554  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:58.012478  306360 cri.go:89] found id: ""
	I0407 14:15:58.012508  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.012519  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:58.012528  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:58.012591  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:58.046324  306360 cri.go:89] found id: ""
	I0407 14:15:58.046352  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.046359  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:58.046365  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:58.046416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:58.082655  306360 cri.go:89] found id: ""
	I0407 14:15:58.082690  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.082701  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:58.082771  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:58.082845  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:58.117888  306360 cri.go:89] found id: ""
	I0407 14:15:58.117917  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.117929  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:58.117936  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:58.118002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:58.158074  306360 cri.go:89] found id: ""
	I0407 14:15:58.158100  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.158110  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:58.158122  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:58.158140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:58.250799  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:58.250823  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:58.250839  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:58.331250  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:58.331289  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:58.373589  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:58.373616  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:58.441487  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:58.441523  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:00.956209  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:00.969519  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:00.969597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:01.006091  306360 cri.go:89] found id: ""
	I0407 14:16:01.006123  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.006134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:01.006142  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:01.006208  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:01.040220  306360 cri.go:89] found id: ""
	I0407 14:16:01.040251  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.040262  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:01.040271  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:01.040341  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:01.075777  306360 cri.go:89] found id: ""
	I0407 14:16:01.075813  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.075824  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:01.075829  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:01.075904  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:01.113161  306360 cri.go:89] found id: ""
	I0407 14:16:01.113188  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.113196  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:01.113202  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:01.113264  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:01.145743  306360 cri.go:89] found id: ""
	I0407 14:16:01.145781  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.145793  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:01.145800  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:01.145891  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:01.180531  306360 cri.go:89] found id: ""
	I0407 14:16:01.180564  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.180576  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:01.180585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:01.180651  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:01.219646  306360 cri.go:89] found id: ""
	I0407 14:16:01.219679  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.219691  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:01.219699  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:01.219765  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:01.262312  306360 cri.go:89] found id: ""
	I0407 14:16:01.262345  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.262352  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:01.262363  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:01.262377  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:01.339749  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:01.339783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:01.382985  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:01.383022  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:01.434889  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:01.434921  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:01.451353  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:01.451378  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:01.532064  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.032625  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:04.045945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:04.046004  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:04.079093  306360 cri.go:89] found id: ""
	I0407 14:16:04.079123  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.079134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:04.079143  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:04.079206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:04.114148  306360 cri.go:89] found id: ""
	I0407 14:16:04.114181  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.114192  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:04.114200  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:04.114270  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:04.152718  306360 cri.go:89] found id: ""
	I0407 14:16:04.152747  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.152758  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:04.152766  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:04.152841  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:04.190031  306360 cri.go:89] found id: ""
	I0407 14:16:04.190065  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.190077  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:04.190085  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:04.190163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:04.227623  306360 cri.go:89] found id: ""
	I0407 14:16:04.227660  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.227671  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:04.227679  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:04.227747  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:04.268005  306360 cri.go:89] found id: ""
	I0407 14:16:04.268035  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.268047  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:04.268055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:04.268125  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:04.304340  306360 cri.go:89] found id: ""
	I0407 14:16:04.304364  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.304374  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:04.304381  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:04.304456  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:04.341425  306360 cri.go:89] found id: ""
	I0407 14:16:04.341490  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.341502  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:04.341513  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:04.341526  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:04.398148  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:04.398179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:04.414586  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:04.414612  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:04.482621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.482650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:04.482669  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:04.556315  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:04.556359  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.115968  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:07.129613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:07.129672  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:07.167142  306360 cri.go:89] found id: ""
	I0407 14:16:07.167170  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.167180  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:07.167187  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:07.167246  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:07.198691  306360 cri.go:89] found id: ""
	I0407 14:16:07.198723  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.198730  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:07.198736  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:07.198790  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:07.231226  306360 cri.go:89] found id: ""
	I0407 14:16:07.231259  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.231268  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:07.231274  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:07.231326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:07.263714  306360 cri.go:89] found id: ""
	I0407 14:16:07.263746  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.263757  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:07.263765  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:07.263828  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:07.301046  306360 cri.go:89] found id: ""
	I0407 14:16:07.301079  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.301090  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:07.301098  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:07.301189  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:07.333910  306360 cri.go:89] found id: ""
	I0407 14:16:07.333938  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.333948  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:07.333956  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:07.334023  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:07.366899  306360 cri.go:89] found id: ""
	I0407 14:16:07.366927  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.366937  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:07.366945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:07.367014  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:07.398845  306360 cri.go:89] found id: ""
	I0407 14:16:07.398878  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.398887  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:07.398899  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:07.398912  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:07.411632  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:07.411663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:07.478836  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:07.478865  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:07.478883  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:07.557802  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:07.557852  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.602752  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:07.602785  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:10.155705  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:10.169146  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:10.169232  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:10.202657  306360 cri.go:89] found id: ""
	I0407 14:16:10.202694  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.202702  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:10.202708  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:10.202761  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:10.238239  306360 cri.go:89] found id: ""
	I0407 14:16:10.238272  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.238284  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:10.238292  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:10.238363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:10.270804  306360 cri.go:89] found id: ""
	I0407 14:16:10.270833  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.270840  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:10.270847  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:10.270897  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:10.319453  306360 cri.go:89] found id: ""
	I0407 14:16:10.319491  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.319502  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:10.319510  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:10.319581  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:10.352622  306360 cri.go:89] found id: ""
	I0407 14:16:10.352654  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.352663  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:10.352670  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:10.352741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:10.385869  306360 cri.go:89] found id: ""
	I0407 14:16:10.385897  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.385906  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:10.385912  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:10.385979  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:10.420689  306360 cri.go:89] found id: ""
	I0407 14:16:10.420715  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.420724  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:10.420729  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:10.420786  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:10.454182  306360 cri.go:89] found id: ""
	I0407 14:16:10.454210  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.454226  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:10.454238  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:10.454258  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:10.467987  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:10.468021  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:10.535621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:10.535650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:10.535663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:10.613921  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:10.613963  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:10.663267  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:10.663299  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.220167  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:13.234197  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:13.234271  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:13.273116  306360 cri.go:89] found id: ""
	I0407 14:16:13.273159  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.273174  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:13.273180  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:13.273236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:13.309984  306360 cri.go:89] found id: ""
	I0407 14:16:13.310024  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.310036  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:13.310044  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:13.310110  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:13.343107  306360 cri.go:89] found id: ""
	I0407 14:16:13.343145  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.343156  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:13.343162  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:13.343226  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:13.375826  306360 cri.go:89] found id: ""
	I0407 14:16:13.375857  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.375865  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:13.375871  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:13.375934  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:13.408895  306360 cri.go:89] found id: ""
	I0407 14:16:13.408930  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.408940  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:13.408945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:13.409002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:13.442272  306360 cri.go:89] found id: ""
	I0407 14:16:13.442309  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.442319  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:13.442329  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:13.442395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:13.478556  306360 cri.go:89] found id: ""
	I0407 14:16:13.478592  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.478600  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:13.478606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:13.478671  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:13.512229  306360 cri.go:89] found id: ""
	I0407 14:16:13.512264  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.512274  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:13.512287  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:13.512304  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.561858  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:13.561899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:13.575518  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:13.575549  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:13.638490  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:13.638515  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:13.638528  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:13.714178  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:13.714219  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.252354  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:16.265849  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:16.265939  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:16.298742  306360 cri.go:89] found id: ""
	I0407 14:16:16.298774  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.298781  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:16.298788  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:16.298844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:16.332441  306360 cri.go:89] found id: ""
	I0407 14:16:16.332476  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.332487  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:16.332496  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:16.332563  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:16.365820  306360 cri.go:89] found id: ""
	I0407 14:16:16.365857  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.365868  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:16.365880  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:16.365972  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:16.399094  306360 cri.go:89] found id: ""
	I0407 14:16:16.399125  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.399134  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:16.399140  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:16.399193  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:16.433322  306360 cri.go:89] found id: ""
	I0407 14:16:16.433356  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.433364  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:16.433372  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:16.433428  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:16.466435  306360 cri.go:89] found id: ""
	I0407 14:16:16.466466  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.466476  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:16.466484  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:16.466551  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:16.498858  306360 cri.go:89] found id: ""
	I0407 14:16:16.498887  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.498895  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:16.498900  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:16.498952  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:16.531126  306360 cri.go:89] found id: ""
	I0407 14:16:16.531166  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.531177  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:16.531192  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:16.531206  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:16.610817  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:16.610857  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.650145  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:16.650180  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:16.699735  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:16.699821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:16.719603  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:16.719634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:16.813399  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.315126  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:19.327908  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:19.327993  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:19.361834  306360 cri.go:89] found id: ""
	I0407 14:16:19.361868  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.361877  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:19.361883  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:19.361947  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:19.396519  306360 cri.go:89] found id: ""
	I0407 14:16:19.396554  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.396565  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:19.396573  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:19.396645  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:19.431627  306360 cri.go:89] found id: ""
	I0407 14:16:19.431656  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.431665  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:19.431671  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:19.431741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:19.465284  306360 cri.go:89] found id: ""
	I0407 14:16:19.465315  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.465323  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:19.465332  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:19.465393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:19.497940  306360 cri.go:89] found id: ""
	I0407 14:16:19.497970  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.497984  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:19.497991  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:19.498060  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:19.533336  306360 cri.go:89] found id: ""
	I0407 14:16:19.533376  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.533389  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:19.533398  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:19.533469  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:19.568026  306360 cri.go:89] found id: ""
	I0407 14:16:19.568059  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.568076  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:19.568084  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:19.568153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:19.601780  306360 cri.go:89] found id: ""
	I0407 14:16:19.601835  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.601844  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:19.601854  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:19.601865  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:19.642543  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:19.642574  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:19.692073  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:19.692119  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:19.705748  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:19.705783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:19.772531  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.772556  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:19.772577  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.351857  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:22.365447  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:22.365514  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:22.403999  306360 cri.go:89] found id: ""
	I0407 14:16:22.404028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.404036  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:22.404043  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:22.404094  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:22.441384  306360 cri.go:89] found id: ""
	I0407 14:16:22.441417  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.441426  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:22.441432  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:22.441487  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:22.490577  306360 cri.go:89] found id: ""
	I0407 14:16:22.490610  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.490621  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:22.490628  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:22.490714  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:22.537991  306360 cri.go:89] found id: ""
	I0407 14:16:22.538028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.538040  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:22.538049  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:22.538120  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:22.584777  306360 cri.go:89] found id: ""
	I0407 14:16:22.584812  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.584824  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:22.584832  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:22.584920  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:22.627558  306360 cri.go:89] found id: ""
	I0407 14:16:22.627588  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.627596  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:22.627602  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:22.627665  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:22.664048  306360 cri.go:89] found id: ""
	I0407 14:16:22.664080  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.664089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:22.664125  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:22.664180  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:22.697281  306360 cri.go:89] found id: ""
	I0407 14:16:22.697318  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.697329  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:22.697345  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:22.697360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:22.750380  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:22.750418  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:22.764135  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:22.764163  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:22.830720  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:22.830756  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:22.830775  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.910687  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:22.910728  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.452699  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:25.466127  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:25.466217  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:25.503288  306360 cri.go:89] found id: ""
	I0407 14:16:25.503320  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.503329  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:25.503335  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:25.503395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:25.535855  306360 cri.go:89] found id: ""
	I0407 14:16:25.535891  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.535900  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:25.535907  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:25.535969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:25.569103  306360 cri.go:89] found id: ""
	I0407 14:16:25.569135  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.569143  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:25.569149  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:25.569201  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:25.604482  306360 cri.go:89] found id: ""
	I0407 14:16:25.604521  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.604533  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:25.604542  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:25.604600  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:25.638915  306360 cri.go:89] found id: ""
	I0407 14:16:25.638948  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.638958  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:25.638966  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:25.639042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:25.673087  306360 cri.go:89] found id: ""
	I0407 14:16:25.673122  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.673134  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:25.673141  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:25.673211  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:25.706454  306360 cri.go:89] found id: ""
	I0407 14:16:25.706490  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.706502  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:25.706511  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:25.706596  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:25.739824  306360 cri.go:89] found id: ""
	I0407 14:16:25.739861  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.739872  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:25.739885  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:25.739900  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:25.818002  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:25.818045  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.866681  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:25.866715  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:25.920791  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:25.920824  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:25.934838  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:25.934870  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:26.005417  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:28.507450  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:28.526968  306360 kubeadm.go:597] duration metric: took 4m4.425341549s to restartPrimaryControlPlane
	W0407 14:16:28.527068  306360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0407 14:16:28.527097  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:16:33.604963  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.077840903s)
	I0407 14:16:33.605045  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:16:33.619392  306360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:16:33.629694  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:16:33.639997  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:16:33.640021  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:16:33.640070  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:16:33.648891  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:16:33.648942  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:16:33.657964  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:16:33.666862  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:16:33.666907  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:16:33.675917  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.684806  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:16:33.684865  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.694385  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:16:33.703347  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:16:33.703399  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:16:33.712413  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:16:33.785507  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:16:33.785591  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:16:33.919661  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:16:33.919797  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:16:33.919913  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:16:34.088006  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:16:34.090058  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:16:34.090179  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:16:34.090273  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:16:34.090394  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:16:34.090467  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:16:34.090559  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:16:34.090629  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:16:34.090692  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:16:34.090745  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:16:34.090963  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:16:34.091371  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:16:34.091513  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:16:34.091573  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:16:34.250084  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:16:34.456551  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:16:34.600069  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:16:34.730872  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:16:34.745839  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:16:34.748203  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:16:34.748481  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:16:34.899583  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:16:34.901383  306360 out.go:235]   - Booting up control plane ...
	I0407 14:16:34.901512  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:16:34.910634  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:16:34.913019  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:16:34.913965  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:16:34.916441  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:17:14.918244  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:17:14.918361  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:14.918550  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:19.918793  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:19.919063  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:29.919626  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:29.919857  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:49.920620  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:49.920914  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.922713  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:18:29.922989  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.923024  306360 kubeadm.go:310] 
	I0407 14:18:29.923100  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:18:29.923192  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:18:29.923212  306360 kubeadm.go:310] 
	I0407 14:18:29.923266  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:18:29.923310  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:18:29.923461  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:18:29.923472  306360 kubeadm.go:310] 
	I0407 14:18:29.923695  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:18:29.923740  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:18:29.923826  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:18:29.923853  306360 kubeadm.go:310] 
	I0407 14:18:29.924004  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:18:29.924126  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:18:29.924136  306360 kubeadm.go:310] 
	I0407 14:18:29.924282  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:18:29.924392  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:18:29.924528  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:18:29.924627  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:18:29.924654  306360 kubeadm.go:310] 
	I0407 14:18:29.924807  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:18:29.924945  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:18:29.925037  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 14:18:29.925275  306360 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 14:18:29.925332  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:18:35.351481  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.426121458s)
	I0407 14:18:35.351559  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:18:35.365827  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:18:35.376549  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:18:35.376577  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:18:35.376637  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:18:35.386629  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:18:35.386696  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:18:35.397247  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:18:35.406945  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:18:35.407018  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:18:35.416924  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.426596  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:18:35.426665  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.436695  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:18:35.446316  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:18:35.446368  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:18:35.455990  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:18:35.529786  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:18:35.529882  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:18:35.669860  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:18:35.670044  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:18:35.670206  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:18:35.849445  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:18:35.856509  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:18:35.856606  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:18:35.856681  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:18:35.856771  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:18:35.856853  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:18:35.856956  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:18:35.857016  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:18:35.857075  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:18:35.857126  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:18:35.857196  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:18:35.857268  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:18:35.857304  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:18:35.857357  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:18:35.974809  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:18:36.175364  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:18:36.293266  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:18:36.465625  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:18:36.480525  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:18:36.481848  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:18:36.481922  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:18:36.613415  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:18:36.615110  306360 out.go:235]   - Booting up control plane ...
	I0407 14:18:36.615269  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:18:36.628134  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:18:36.629532  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:18:36.630589  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:18:36.634513  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:19:16.636775  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:19:16.637057  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:16.637316  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:21.638264  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:21.638529  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:31.638701  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:31.638962  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:51.638889  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:51.639128  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638384  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:20:31.638644  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638668  306360 kubeadm.go:310] 
	I0407 14:20:31.638702  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:20:31.638742  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:20:31.638748  306360 kubeadm.go:310] 
	I0407 14:20:31.638775  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:20:31.638810  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:20:31.638898  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:20:31.638904  306360 kubeadm.go:310] 
	I0407 14:20:31.638985  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:20:31.639023  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:20:31.639065  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:20:31.639072  306360 kubeadm.go:310] 
	I0407 14:20:31.639203  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:20:31.639327  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:20:31.639358  306360 kubeadm.go:310] 
	I0407 14:20:31.639513  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:20:31.639633  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:20:31.639734  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:20:31.639862  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:20:31.639875  306360 kubeadm.go:310] 
	I0407 14:20:31.640981  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:20:31.641122  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:20:31.641237  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:20:31.641301  306360 kubeadm.go:394] duration metric: took 8m7.609204589s to StartCluster
	I0407 14:20:31.641373  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:20:31.641452  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:20:31.685303  306360 cri.go:89] found id: ""
	I0407 14:20:31.685334  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.685345  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:20:31.685353  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:20:31.685419  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:20:31.719244  306360 cri.go:89] found id: ""
	I0407 14:20:31.719274  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.719285  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:20:31.719293  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:20:31.719367  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:20:31.753252  306360 cri.go:89] found id: ""
	I0407 14:20:31.753282  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.753292  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:20:31.753299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:20:31.753366  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:20:31.783957  306360 cri.go:89] found id: ""
	I0407 14:20:31.784001  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.784014  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:20:31.784024  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:20:31.784113  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:20:31.819615  306360 cri.go:89] found id: ""
	I0407 14:20:31.819652  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.819660  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:20:31.819666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:20:31.819730  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:20:31.855903  306360 cri.go:89] found id: ""
	I0407 14:20:31.855942  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.855954  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:20:31.855962  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:20:31.856028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:20:31.890988  306360 cri.go:89] found id: ""
	I0407 14:20:31.891018  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.891027  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:20:31.891033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:20:31.891086  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:20:31.924794  306360 cri.go:89] found id: ""
	I0407 14:20:31.924827  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.924837  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:20:31.924861  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:20:31.924876  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:20:31.972904  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:20:31.972948  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:20:31.988056  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:20:31.988090  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:20:32.061617  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:20:32.061657  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:20:32.061672  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:20:32.165554  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:20:32.165600  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 14:20:32.208010  306360 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:20:32.208080  306360 out.go:270] * 
	W0407 14:20:32.208169  306360 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.208186  306360 out.go:270] * 
	W0407 14:20:32.209134  306360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:20:32.213132  306360 out.go:201] 
	W0407 14:20:32.214433  306360 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.214485  306360 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:20:32.214528  306360 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:20:32.216101  306360 out.go:201] 

                                                
                                                
** /stderr **
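The repeated [kubelet-check] failures in the stderr above are kubeadm polling the kubelet's local healthz endpoint and getting connection refused because the kubelet never came up. As a rough illustration only (not part of the captured output), that probe amounts to an HTTP GET against the kubelet's default healthz port 10248; a minimal Go sketch:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same check the [kubelet-check] lines perform: GET the kubelet healthz
	// endpoint on its default local port (10248). "connection refused" here
	// means nothing is listening, i.e. the kubelet process is not running,
	// matching the failure mode in the log above.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}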
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-405646 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
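Acting on minikube's suggestion in the log above, one way to retry by hand is to re-run the same start with the kubelet cgroup-driver override. This is only an illustrative sketch assembled from the flags visible in the failing command and the suggested --extra-config option, not something the test itself runs:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Re-run the failed start, adding the suggested kubelet cgroup-driver
	// override. Binary path, profile name, and flags are taken from the
	// failing command and suggestion logged above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "old-k8s-version-405646",
		"--memory=2200",
		"--driver=kvm2",
		"--container-runtime=crio",
		"--kubernetes-version=v1.20.0",
		"--extra-config=kubelet.cgroup-driver=systemd",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}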
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (240.345227ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-574417 image list                          | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-421325 image list                           | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| addons  | enable metrics-server -p newest-cni-541721             | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-718753                           | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-541721                  | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-541721 image list                           | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:15:25
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:15:25.628644  308831 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:15:25.628943  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.628954  308831 out.go:358] Setting ErrFile to fd 2...
	I0407 14:15:25.628958  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.629163  308831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:15:25.629716  308831 out.go:352] Setting JSON to false
	I0407 14:15:25.630676  308831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":21473,"bootTime":1744013853,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:15:25.630790  308831 start.go:139] virtualization: kvm guest
	I0407 14:15:25.632653  308831 out.go:177] * [newest-cni-541721] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:15:25.634114  308831 notify.go:220] Checking for updates...
	I0407 14:15:25.634125  308831 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:15:25.635477  308831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:15:25.636815  308831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:25.638126  308831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:15:25.639208  308831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:15:25.640304  308831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:15:25.642142  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:25.642732  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.642805  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.658473  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0407 14:15:25.659219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.659736  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.659760  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.660180  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.660352  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.660628  308831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:15:25.660918  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.660962  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.676620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0407 14:15:25.677061  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.677654  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.677687  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.678106  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.678327  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.714508  308831 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:15:25.715654  308831 start.go:297] selected driver: kvm2
	I0407 14:15:25.715669  308831 start.go:901] validating driver "kvm2" against &{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.715769  308831 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:15:25.716608  308831 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.716681  308831 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:15:25.731568  308831 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:15:25.731948  308831 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:25.731981  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:25.732021  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:25.732057  308831 start.go:340] cluster config:
	{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.732169  308831 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.734706  308831 out.go:177] * Starting "newest-cni-541721" primary control-plane node in "newest-cni-541721" cluster
	I0407 14:15:25.736251  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:25.736285  308831 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:15:25.736295  308831 cache.go:56] Caching tarball of preloaded images
	I0407 14:15:25.736375  308831 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:15:25.736390  308831 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:15:25.736522  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:25.736737  308831 start.go:360] acquireMachinesLock for newest-cni-541721: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:15:25.736784  308831 start.go:364] duration metric: took 28.182µs to acquireMachinesLock for "newest-cni-541721"
	I0407 14:15:25.736805  308831 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:15:25.736811  308831 fix.go:54] fixHost starting: 
	I0407 14:15:25.737111  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.737147  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.751728  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0407 14:15:25.752219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.752697  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.752718  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.753019  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.753228  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.753385  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:25.754926  308831 fix.go:112] recreateIfNeeded on newest-cni-541721: state=Stopped err=<nil>
	I0407 14:15:25.754953  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	W0407 14:15:25.755089  308831 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:15:25.757704  308831 out.go:177] * Restarting existing kvm2 VM for "newest-cni-541721" ...
	I0407 14:15:20.896637  306360 cri.go:89] found id: ""
	I0407 14:15:20.896666  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.896673  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:20.896679  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:20.896737  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:20.937796  306360 cri.go:89] found id: ""
	I0407 14:15:20.937828  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.937837  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:20.937843  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:20.937896  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:20.983104  306360 cri.go:89] found id: ""
	I0407 14:15:20.983138  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.983149  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:20.983157  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:20.983222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:21.024555  306360 cri.go:89] found id: ""
	I0407 14:15:21.024591  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.024602  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:21.024609  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:21.024685  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:21.068400  306360 cri.go:89] found id: ""
	I0407 14:15:21.068484  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.068495  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:21.068502  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:21.068572  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:21.107962  306360 cri.go:89] found id: ""
	I0407 14:15:21.107990  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.107998  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:21.108004  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:21.108067  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:21.147955  306360 cri.go:89] found id: ""
	I0407 14:15:21.147981  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.147989  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:21.147999  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:21.148010  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:21.164790  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:21.164818  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:21.236045  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:21.236068  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:21.236081  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:21.313784  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:21.313821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:21.357183  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:21.357215  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:23.907736  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:23.921413  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:23.921481  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:23.959486  306360 cri.go:89] found id: ""
	I0407 14:15:23.959513  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.959520  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:23.959526  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:23.959585  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:23.992912  306360 cri.go:89] found id: ""
	I0407 14:15:23.992938  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.992946  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:23.992952  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:23.993010  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:24.024279  306360 cri.go:89] found id: ""
	I0407 14:15:24.024308  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.024316  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:24.024323  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:24.024376  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:24.062320  306360 cri.go:89] found id: ""
	I0407 14:15:24.062353  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.062362  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:24.062371  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:24.062432  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:24.122748  306360 cri.go:89] found id: ""
	I0407 14:15:24.122774  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.122782  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:24.122787  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:24.122857  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:24.156773  306360 cri.go:89] found id: ""
	I0407 14:15:24.156803  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.156814  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:24.156831  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:24.156899  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:24.192903  306360 cri.go:89] found id: ""
	I0407 14:15:24.192940  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.192952  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:24.192960  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:24.193017  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:24.228041  306360 cri.go:89] found id: ""
	I0407 14:15:24.228081  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.228093  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:24.228105  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:24.228122  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:24.276177  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:24.276212  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:24.289668  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:24.289701  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:24.356935  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:24.356962  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:24.356981  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:24.442103  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:24.442140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:25.758835  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Start
	I0407 14:15:25.759008  308831 main.go:141] libmachine: (newest-cni-541721) starting domain...
	I0407 14:15:25.759031  308831 main.go:141] libmachine: (newest-cni-541721) ensuring networks are active...
	I0407 14:15:25.759774  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network default is active
	I0407 14:15:25.760125  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network mk-newest-cni-541721 is active
	I0407 14:15:25.760533  308831 main.go:141] libmachine: (newest-cni-541721) getting domain XML...
	I0407 14:15:25.761459  308831 main.go:141] libmachine: (newest-cni-541721) creating domain...
	I0407 14:15:26.961388  308831 main.go:141] libmachine: (newest-cni-541721) waiting for IP...
	I0407 14:15:26.962280  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:26.962679  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:26.962806  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:26.962715  308884 retry.go:31] will retry after 224.710577ms: waiting for domain to come up
	I0407 14:15:27.189309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.189924  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.189984  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.189909  308884 retry.go:31] will retry after 298.222768ms: waiting for domain to come up
	I0407 14:15:27.489516  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.490094  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.490131  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.490026  308884 retry.go:31] will retry after 465.194234ms: waiting for domain to come up
	I0407 14:15:27.956675  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.957258  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.957283  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.957226  308884 retry.go:31] will retry after 534.441737ms: waiting for domain to come up
	I0407 14:15:28.493247  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:28.493782  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:28.493811  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:28.493750  308884 retry.go:31] will retry after 611.035562ms: waiting for domain to come up
	I0407 14:15:29.106699  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.107212  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.107234  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.107187  308884 retry.go:31] will retry after 705.783816ms: waiting for domain to come up
	I0407 14:15:29.814350  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.814874  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.814904  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.814847  308884 retry.go:31] will retry after 951.819617ms: waiting for domain to come up
	I0407 14:15:26.983553  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:26.996033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:26.996104  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:27.029665  306360 cri.go:89] found id: ""
	I0407 14:15:27.029692  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.029700  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:27.029705  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:27.029756  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:27.069962  306360 cri.go:89] found id: ""
	I0407 14:15:27.069992  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.070000  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:27.070009  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:27.070074  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:27.112142  306360 cri.go:89] found id: ""
	I0407 14:15:27.112174  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.112182  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:27.112188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:27.112240  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:27.152647  306360 cri.go:89] found id: ""
	I0407 14:15:27.152675  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.152685  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:27.152691  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:27.152743  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:27.188973  306360 cri.go:89] found id: ""
	I0407 14:15:27.189004  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.189015  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:27.189023  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:27.189099  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:27.228054  306360 cri.go:89] found id: ""
	I0407 14:15:27.228085  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.228095  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:27.228102  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:27.228164  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:27.262089  306360 cri.go:89] found id: ""
	I0407 14:15:27.262121  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.262131  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:27.262152  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:27.262222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:27.298902  306360 cri.go:89] found id: ""
	I0407 14:15:27.298939  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.298951  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:27.298969  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:27.298988  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:27.338649  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:27.338676  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:27.388606  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:27.388653  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:27.403449  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:27.403491  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:27.469414  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:27.469448  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:27.469467  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.052698  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:30.071454  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:30.071529  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:30.104690  306360 cri.go:89] found id: ""
	I0407 14:15:30.104723  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.104733  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:30.104741  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:30.104805  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:30.139611  306360 cri.go:89] found id: ""
	I0407 14:15:30.139641  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.139651  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:30.139658  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:30.139724  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:30.173648  306360 cri.go:89] found id: ""
	I0407 14:15:30.173679  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.173691  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:30.173702  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:30.173766  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:30.207015  306360 cri.go:89] found id: ""
	I0407 14:15:30.207045  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.207055  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:30.207062  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:30.207141  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:30.242602  306360 cri.go:89] found id: ""
	I0407 14:15:30.242631  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.242642  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:30.242647  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:30.242698  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:30.275775  306360 cri.go:89] found id: ""
	I0407 14:15:30.275811  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.275824  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:30.275834  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:30.275906  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:30.310674  306360 cri.go:89] found id: ""
	I0407 14:15:30.310710  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.310722  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:30.310734  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:30.310803  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:30.342628  306360 cri.go:89] found id: ""
	I0407 14:15:30.342666  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.342677  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:30.342690  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:30.342704  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:30.390588  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:30.390625  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:30.405143  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:30.405179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:30.473557  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:30.473590  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:30.473607  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.555915  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:30.555961  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:30.768801  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:30.769309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:30.769368  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:30.769289  308884 retry.go:31] will retry after 1.473723354s: waiting for domain to come up
	I0407 14:15:32.244907  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:32.245389  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:32.245420  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:32.245345  308884 retry.go:31] will retry after 1.499915681s: waiting for domain to come up
	I0407 14:15:33.747106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:33.747641  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:33.747664  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:33.747621  308884 retry.go:31] will retry after 1.755869329s: waiting for domain to come up
	I0407 14:15:35.505715  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:35.506189  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:35.506224  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:35.506149  308884 retry.go:31] will retry after 1.908921296s: waiting for domain to come up
	I0407 14:15:33.094714  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:33.107818  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:33.107883  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:33.147279  306360 cri.go:89] found id: ""
	I0407 14:15:33.147310  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.147317  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:33.147323  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:33.147374  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:33.182866  306360 cri.go:89] found id: ""
	I0407 14:15:33.182895  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.182903  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:33.182909  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:33.182962  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:33.219845  306360 cri.go:89] found id: ""
	I0407 14:15:33.219881  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.219894  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:33.219903  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:33.219980  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:33.255785  306360 cri.go:89] found id: ""
	I0407 14:15:33.255818  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.255832  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:33.255838  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:33.255888  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:33.296287  306360 cri.go:89] found id: ""
	I0407 14:15:33.296320  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.296331  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:33.296339  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:33.296406  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:33.333123  306360 cri.go:89] found id: ""
	I0407 14:15:33.333156  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.333167  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:33.333174  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:33.333244  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:33.367813  306360 cri.go:89] found id: ""
	I0407 14:15:33.367844  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.367855  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:33.367862  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:33.367930  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:33.401927  306360 cri.go:89] found id: ""
	I0407 14:15:33.401957  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.401964  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:33.401974  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:33.401985  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:33.464350  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:33.464390  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:33.478831  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:33.478866  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:33.554322  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:33.554352  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:33.554370  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:33.632339  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:33.632381  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:37.417168  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:37.417658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:37.417734  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:37.417635  308884 retry.go:31] will retry after 3.116726133s: waiting for domain to come up
	I0407 14:15:40.537848  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:40.538357  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:40.538386  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:40.538314  308884 retry.go:31] will retry after 2.7485631s: waiting for domain to come up
	I0407 14:15:36.177635  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:36.191117  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:36.191215  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:36.229342  306360 cri.go:89] found id: ""
	I0407 14:15:36.229373  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.229384  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:36.229391  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:36.229461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:36.269119  306360 cri.go:89] found id: ""
	I0407 14:15:36.269151  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.269162  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:36.269170  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:36.269236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:36.312510  306360 cri.go:89] found id: ""
	I0407 14:15:36.312544  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.312556  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:36.312563  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:36.312632  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:36.346706  306360 cri.go:89] found id: ""
	I0407 14:15:36.346741  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.346753  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:36.346762  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:36.346830  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:36.382862  306360 cri.go:89] found id: ""
	I0407 14:15:36.382899  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.382912  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:36.382920  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:36.382989  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:36.424287  306360 cri.go:89] found id: ""
	I0407 14:15:36.424318  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.424329  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:36.424337  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:36.424407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:36.473843  306360 cri.go:89] found id: ""
	I0407 14:15:36.473891  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.473906  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:36.473916  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:36.474002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:36.532647  306360 cri.go:89] found id: ""
	I0407 14:15:36.532685  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.532697  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:36.532711  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:36.532727  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:36.599779  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:36.599820  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:36.614047  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:36.614082  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:36.692006  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:36.692030  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:36.692044  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:36.782142  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:36.782196  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.320544  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:39.333558  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:39.333630  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:39.367209  306360 cri.go:89] found id: ""
	I0407 14:15:39.367244  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.367255  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:39.367264  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:39.367338  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:39.406298  306360 cri.go:89] found id: ""
	I0407 14:15:39.406326  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.406335  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:39.406342  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:39.406407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:39.440090  306360 cri.go:89] found id: ""
	I0407 14:15:39.440118  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.440128  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:39.440134  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:39.440197  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:39.473483  306360 cri.go:89] found id: ""
	I0407 14:15:39.473514  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.473527  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:39.473534  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:39.473602  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:39.505571  306360 cri.go:89] found id: ""
	I0407 14:15:39.505599  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.505607  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:39.505613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:39.505676  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:39.538929  306360 cri.go:89] found id: ""
	I0407 14:15:39.538961  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.538971  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:39.538980  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:39.539045  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:39.572047  306360 cri.go:89] found id: ""
	I0407 14:15:39.572078  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.572089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:39.572097  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:39.572163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:39.605781  306360 cri.go:89] found id: ""
	I0407 14:15:39.605812  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.605854  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:39.605868  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:39.605885  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:39.684887  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:39.684931  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.725609  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:39.725639  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:39.776592  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:39.776634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:39.792687  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:39.792719  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:39.859832  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:43.289843  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290313  308831 main.go:141] libmachine: (newest-cni-541721) found domain IP: 192.168.39.230
	I0407 14:15:43.290342  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has current primary IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290351  308831 main.go:141] libmachine: (newest-cni-541721) reserving static IP address...
	I0407 14:15:43.290797  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.290844  308831 main.go:141] libmachine: (newest-cni-541721) DBG | skip adding static IP to network mk-newest-cni-541721 - found existing host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"}
	I0407 14:15:43.290861  308831 main.go:141] libmachine: (newest-cni-541721) reserved static IP address 192.168.39.230 for domain newest-cni-541721
	I0407 14:15:43.290877  308831 main.go:141] libmachine: (newest-cni-541721) waiting for SSH...
	I0407 14:15:43.290888  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Getting to WaitForSSH function...
	I0407 14:15:43.293128  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293457  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.293482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293603  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH client type: external
	I0407 14:15:43.293630  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa (-rw-------)
	I0407 14:15:43.293658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:15:43.293670  308831 main.go:141] libmachine: (newest-cni-541721) DBG | About to run SSH command:
	I0407 14:15:43.293684  308831 main.go:141] libmachine: (newest-cni-541721) DBG | exit 0
	I0407 14:15:43.420319  308831 main.go:141] libmachine: (newest-cni-541721) DBG | SSH cmd err, output: <nil>: 
	I0407 14:15:43.420721  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetConfigRaw
	I0407 14:15:43.421390  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.424495  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.424838  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.424863  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.425125  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:43.425347  308831 machine.go:93] provisionDockerMachine start ...
	I0407 14:15:43.425369  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:43.425612  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.428118  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428491  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.428518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428670  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.428877  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429081  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429220  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.429407  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.429675  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.429686  308831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:15:43.536790  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:15:43.536829  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537083  308831 buildroot.go:166] provisioning hostname "newest-cni-541721"
	I0407 14:15:43.537120  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.540191  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540559  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.540585  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540732  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.540899  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541132  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.541478  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.541679  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.541692  308831 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-541721 && echo "newest-cni-541721" | sudo tee /etc/hostname
	I0407 14:15:43.663263  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-541721
	
	I0407 14:15:43.663296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.665913  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666215  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.666245  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666389  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.666571  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666726  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666878  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.667008  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.667209  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.667223  308831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-541721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-541721/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-541721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:15:43.781703  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:15:43.781735  308831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:15:43.781770  308831 buildroot.go:174] setting up certificates
	I0407 14:15:43.781781  308831 provision.go:84] configureAuth start
	I0407 14:15:43.781789  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.782098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.784807  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785138  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.785165  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785310  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.787964  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788465  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.788506  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788684  308831 provision.go:143] copyHostCerts
	I0407 14:15:43.788737  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:15:43.788762  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:15:43.788828  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:15:43.788909  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:15:43.788917  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:15:43.788941  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:15:43.789008  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:15:43.789016  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:15:43.789045  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:15:43.789089  308831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-541721 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-541721]
	I0407 14:15:44.038906  308831 provision.go:177] copyRemoteCerts
	I0407 14:15:44.038972  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:15:44.038998  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.041517  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.041889  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.041921  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.042056  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.042296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.042445  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.042564  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.126574  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0407 14:15:44.150348  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 14:15:44.173128  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:15:44.196028  308831 provision.go:87] duration metric: took 414.219253ms to configureAuth
	I0407 14:15:44.196057  308831 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:15:44.196256  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:44.196365  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.198992  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199332  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.199359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.199649  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199841  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199983  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.200187  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.200392  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.200406  308831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:15:44.425698  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:15:44.425730  308831 machine.go:96] duration metric: took 1.00036936s to provisionDockerMachine
	I0407 14:15:44.425742  308831 start.go:293] postStartSetup for "newest-cni-541721" (driver="kvm2")
	I0407 14:15:44.425753  308831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:15:44.425769  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.426237  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:15:44.426282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.428748  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429105  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.429137  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429312  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.429508  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.429691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.429839  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.514924  308831 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:15:44.519014  308831 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:15:44.519041  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:15:44.519105  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:15:44.519203  308831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:15:44.519338  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:15:44.528306  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:44.552208  308831 start.go:296] duration metric: took 126.448126ms for postStartSetup
	I0407 14:15:44.552258  308831 fix.go:56] duration metric: took 18.815446562s for fixHost
	I0407 14:15:44.552283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.555012  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555411  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.555436  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555613  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.555777  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.555921  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.556086  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.556274  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.556581  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.556596  308831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:15:44.665315  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035344.637882085
	
	I0407 14:15:44.665344  308831 fix.go:216] guest clock: 1744035344.637882085
	I0407 14:15:44.665352  308831 fix.go:229] Guest: 2025-04-07 14:15:44.637882085 +0000 UTC Remote: 2025-04-07 14:15:44.552262543 +0000 UTC m=+18.960633497 (delta=85.619542ms)
	I0407 14:15:44.665378  308831 fix.go:200] guest clock delta is within tolerance: 85.619542ms
	I0407 14:15:44.665385  308831 start.go:83] releasing machines lock for "newest-cni-541721", held for 18.928588169s
	I0407 14:15:44.665411  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.665665  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:44.668359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.668769  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.668796  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.669001  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669663  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669764  308831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:15:44.669821  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.669881  308831 ssh_runner.go:195] Run: cat /version.json
	I0407 14:15:44.669903  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.672537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672728  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672882  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.672910  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673079  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.673108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673126  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673306  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673471  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673479  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673639  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673629  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.673808  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.772603  308831 ssh_runner.go:195] Run: systemctl --version
	I0407 14:15:44.778824  308831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:15:44.927200  308831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:15:44.934229  308831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:15:44.934295  308831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:15:44.949862  308831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:15:44.949886  308831 start.go:495] detecting cgroup driver to use...
	I0407 14:15:44.949946  308831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:15:44.965426  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:15:44.978798  308831 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:15:44.978861  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:15:44.991899  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:15:45.004571  308831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:15:45.128809  308831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:15:45.285871  308831 docker.go:233] disabling docker service ...
	I0407 14:15:45.285943  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:15:45.300353  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:15:45.313521  308831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:15:45.446753  308831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:15:45.566017  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:15:45.581006  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:15:45.599340  308831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 14:15:45.599422  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.609965  308831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:15:45.610059  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.620860  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:42.361106  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:42.374378  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:42.374461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:42.409267  306360 cri.go:89] found id: ""
	I0407 14:15:42.409296  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.409304  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:42.409309  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:42.409361  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:42.442512  306360 cri.go:89] found id: ""
	I0407 14:15:42.442540  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.442548  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:42.442554  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:42.442603  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:42.476016  306360 cri.go:89] found id: ""
	I0407 14:15:42.476044  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.476055  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:42.476063  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:42.476127  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:42.507103  306360 cri.go:89] found id: ""
	I0407 14:15:42.507138  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.507145  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:42.507151  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:42.507205  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:42.543140  306360 cri.go:89] found id: ""
	I0407 14:15:42.543167  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.543178  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:42.543185  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:42.543260  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:42.583718  306360 cri.go:89] found id: ""
	I0407 14:15:42.583749  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.583756  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:42.583764  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:42.583826  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:42.617614  306360 cri.go:89] found id: ""
	I0407 14:15:42.617649  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.617660  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:42.617668  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:42.617736  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:42.652193  306360 cri.go:89] found id: ""
	I0407 14:15:42.652220  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.652227  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:42.652237  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:42.652250  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:42.700778  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:42.700817  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:42.713926  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:42.713958  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:42.781552  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:42.781577  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:42.781590  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:42.857460  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:42.857502  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:45.397689  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:45.416022  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:45.416089  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:45.457038  306360 cri.go:89] found id: ""
	I0407 14:15:45.457078  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.457089  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:45.457097  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:45.457168  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:45.491527  306360 cri.go:89] found id: ""
	I0407 14:15:45.491559  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.491570  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:45.491578  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:45.491647  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:45.524296  306360 cri.go:89] found id: ""
	I0407 14:15:45.524333  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.524344  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:45.524352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:45.524416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:45.562418  306360 cri.go:89] found id: ""
	I0407 14:15:45.562450  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.562461  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:45.562469  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:45.562537  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:45.601384  306360 cri.go:89] found id: ""
	I0407 14:15:45.601409  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.601417  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:45.601423  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:45.601471  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:45.638899  306360 cri.go:89] found id: ""
	I0407 14:15:45.638924  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.638933  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:45.638939  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:45.639005  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:45.675994  306360 cri.go:89] found id: ""
	I0407 14:15:45.676031  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.676047  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:45.676064  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:45.676128  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:45.714599  306360 cri.go:89] found id: ""
	I0407 14:15:45.714626  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.714637  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:45.714648  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:45.714665  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:45.780477  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:45.780527  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:45.794822  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:45.794859  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:45.866895  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:45.866921  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:45.866944  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:45.631474  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.644263  308831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:15:45.658794  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.670123  308831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.689249  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.699508  308831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:15:45.709814  308831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:15:45.709869  308831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:15:45.723859  308831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
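The netfilter verification above boils down to reading two /proc/sys entries (the sysctl failed because br_netfilter was not loaded yet, so minikube falls back to modprobe plus an explicit ip_forward write). Below is a minimal Go sketch of that check, not minikube's own ssh_runner-based code; the key names are taken directly from the commands in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl reads a sysctl value via /proc/sys, mirroring what
// "sudo sysctl net.bridge.bridge-nf-call-iptables" checks above.
// Dots in the key map to slashes in the procfs path.
func readSysctl(key string) (string, error) {
	path := "/proc/sys/" + strings.ReplaceAll(key, ".", "/")
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err // e.g. br_netfilter not loaded yet
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, key := range []string{
		"net.bridge.bridge-nf-call-iptables", // requires the br_netfilter module
		"net.ipv4.ip_forward",                // set to 1 by the echo above
	} {
		val, err := readSysctl(key)
		if err != nil {
			fmt.Printf("%s: not available (%v)\n", key, err)
			continue
		}
		fmt.Printf("%s = %s\n", key, val)
	}
}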
	I0407 14:15:45.733593  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:45.849319  308831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 14:15:45.947041  308831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:15:45.947134  308831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:15:45.952013  308831 start.go:563] Will wait 60s for crictl version
	I0407 14:15:45.952094  308831 ssh_runner.go:195] Run: which crictl
	I0407 14:15:45.956063  308831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:15:46.003168  308831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:15:46.003266  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.030604  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.060415  308831 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 14:15:46.061532  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:46.064257  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064649  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:46.064686  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064942  308831 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 14:15:46.069108  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:46.082697  308831 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0407 14:15:46.083791  308831 kubeadm.go:883] updating cluster {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5
41721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:15:46.083896  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:46.083950  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:46.117284  308831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 14:15:46.117364  308831 ssh_runner.go:195] Run: which lz4
	I0407 14:15:46.121377  308831 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 14:15:46.125460  308831 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 14:15:46.125488  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 14:15:47.523799  308831 crio.go:462] duration metric: took 1.402446769s to copy over tarball
	I0407 14:15:47.523885  308831 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 14:15:49.780413  308831 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256487333s)
	I0407 14:15:49.780472  308831 crio.go:469] duration metric: took 2.256631266s to extract the tarball
	I0407 14:15:49.780484  308831 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 14:15:49.817617  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:49.861772  308831 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:15:49.861798  308831 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:15:49.861811  308831 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.32.2 crio true true} ...
	I0407 14:15:49.861914  308831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-541721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
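The kubelet drop-in above is rendered from the node values in the cluster config (binary version, hostname override, node IP). The following is an illustrative Go sketch of how such a unit line can be templated from those values; the struct and template names are hypothetical, not minikube's actual ones.

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a simplified stand-in for the values visible in the
// drop-in above; illustrative only.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	opts := kubeletOpts{
		KubernetesVersion: "v1.32.2",
		NodeName:          "newest-cni-541721",
		NodeIP:            "192.168.39.230",
	}
	t := template.Must(template.New("unit").Parse(unitTmpl))
	// Renders the same ExecStart line that appears in the log above.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}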
	I0407 14:15:49.861982  308831 ssh_runner.go:195] Run: crio config
	I0407 14:15:49.906766  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:49.906790  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:49.906799  308831 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0407 14:15:49.906821  308831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-541721 NodeName:newest-cni-541721 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:15:49.906963  308831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-541721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
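The generated config above pairs podSubnet 10.42.0.0/16 (from kubeadm.pod-network-cidr) with serviceSubnet 10.96.0.0/12. A quick way to sanity-check that the two ranges do not collide is a standard-library CIDR comparison; this sketch is illustrative and not part of minikube.

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two CIDR prefixes share any addresses.
// For masked prefixes this holds exactly when one contains the
// other's base address.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config above
	fmt.Printf("pod/service CIDRs overlap: %v\n", overlaps(pod.Masked(), svc.Masked()))
}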
	I0407 14:15:49.907028  308831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:15:49.917114  308831 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:15:49.917177  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:15:49.927296  308831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 14:15:49.945058  308831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:15:49.962171  308831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0407 14:15:49.981232  308831 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0407 14:15:49.985429  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:49.997919  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:50.112228  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:50.138008  308831 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721 for IP: 192.168.39.230
	I0407 14:15:50.138038  308831 certs.go:194] generating shared ca certs ...
	I0407 14:15:50.138056  308831 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.138217  308831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:15:50.138257  308831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:15:50.138269  308831 certs.go:256] generating profile certs ...
	I0407 14:15:50.138383  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/client.key
	I0407 14:15:50.138463  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key.ae70fd14
	I0407 14:15:50.138512  308831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key
	I0407 14:15:50.138669  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:15:50.138721  308831 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:15:50.138735  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:15:50.138774  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:15:50.138805  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:15:50.138835  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:15:50.138899  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:50.139675  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:15:50.197283  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:15:50.242193  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:15:50.269592  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:15:50.295620  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 14:15:50.326901  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:15:50.350149  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:15:50.373570  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:15:50.396967  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:15:50.419713  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:15:50.443345  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:15:50.466277  308831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:15:50.482772  308831 ssh_runner.go:195] Run: openssl version
	I0407 14:15:50.488692  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:15:50.499480  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504091  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504182  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.510343  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:15:50.521521  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:15:50.532621  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537354  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537410  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.543022  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:15:50.554034  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:15:50.564979  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569666  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569727  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.575423  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:15:50.586213  308831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:15:50.590961  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:15:50.596887  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:15:50.602578  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:15:50.608528  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:15:50.614421  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:15:50.620333  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
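The series of "openssl x509 -checkend 86400" runs above asks, for each control-plane certificate, whether it expires within the next 24 hours. A hedged Go equivalent using crypto/x509 is sketched below (illustrative, not minikube's implementation); the paths are copied from the checks in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d -- the same question "openssl x509 -checkend 86400" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Printf("%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}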
	I0407 14:15:50.626231  308831 kubeadm.go:392] StartCluster: {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5417
21 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:50.626391  308831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:15:50.626505  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:45.951585  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:45.951615  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:48.488815  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:48.507944  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:48.508026  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:48.551257  306360 cri.go:89] found id: ""
	I0407 14:15:48.551300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.551314  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:48.551324  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:48.551402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:48.595600  306360 cri.go:89] found id: ""
	I0407 14:15:48.595626  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.595634  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:48.595640  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:48.595704  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:48.639221  306360 cri.go:89] found id: ""
	I0407 14:15:48.639248  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.639255  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:48.639261  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:48.639326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:48.680520  306360 cri.go:89] found id: ""
	I0407 14:15:48.680562  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.680575  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:48.680585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:48.680679  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:48.728260  306360 cri.go:89] found id: ""
	I0407 14:15:48.728300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.728315  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:48.728326  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:48.728410  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:48.773839  306360 cri.go:89] found id: ""
	I0407 14:15:48.773875  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.773886  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:48.773893  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:48.773955  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:48.814915  306360 cri.go:89] found id: ""
	I0407 14:15:48.814947  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.814957  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:48.814963  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:48.815028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:48.860191  306360 cri.go:89] found id: ""
	I0407 14:15:48.860225  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.860245  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:48.860258  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:48.860273  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:48.922676  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:48.922714  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:48.939569  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:48.939618  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:49.016199  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:49.016225  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:49.016248  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:49.097968  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:49.098013  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:50.663771  308831 cri.go:89] found id: ""
	I0407 14:15:50.663873  308831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:15:50.674085  308831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:15:50.674107  308831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:15:50.674160  308831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:15:50.683827  308831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:15:50.684345  308831 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-541721" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:50.684567  308831 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-242355/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-541721" cluster setting kubeconfig missing "newest-cni-541721" context setting]
	I0407 14:15:50.684927  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.686121  308831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:15:50.695269  308831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I0407 14:15:50.695302  308831 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:15:50.695314  308831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 14:15:50.695355  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:50.736911  308831 cri.go:89] found id: ""
	I0407 14:15:50.737008  308831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:15:50.753425  308831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:15:50.765206  308831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:15:50.765225  308831 kubeadm.go:157] found existing configuration files:
	
	I0407 14:15:50.765267  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:15:50.774388  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:15:50.774441  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:15:50.783710  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:15:50.792577  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:15:50.792633  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:15:50.802813  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.811735  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:15:50.811788  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.820555  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:15:50.829705  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:15:50.829752  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
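The cleanup sequence above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not mention it (here the files simply do not exist yet, so every grep exits with status 2 and the rm is a no-op). A minimal Go sketch of that grep-then-remove pattern follows; the function name is hypothetical and this is not the actual kubeadm.go logic.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfig removes path unless it already points at endpoint,
// mirroring the grep-then-rm pattern in the log above.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up, as in this run
		}
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // config already targets the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(p, endpoint); err != nil {
			fmt.Printf("%s: %v\n", p, err)
		}
	}
}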
	I0407 14:15:50.839810  308831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:15:50.849133  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:50.964318  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.072919  308831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108554265s)
	I0407 14:15:52.072960  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.328909  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.421835  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.499558  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:52.499668  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.000158  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.500670  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.520865  308831 api_server.go:72] duration metric: took 1.021307622s to wait for apiserver process to appear ...
	I0407 14:15:53.520900  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:53.520929  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:51.641164  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:51.655473  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:51.655548  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:51.690008  306360 cri.go:89] found id: ""
	I0407 14:15:51.690036  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.690047  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:51.690055  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:51.690118  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:51.728115  306360 cri.go:89] found id: ""
	I0407 14:15:51.728141  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.728150  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:51.728157  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:51.728222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:51.764117  306360 cri.go:89] found id: ""
	I0407 14:15:51.764156  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.764168  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:51.764180  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:51.764243  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:51.801243  306360 cri.go:89] found id: ""
	I0407 14:15:51.801279  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.801291  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:51.801299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:51.801363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:51.838262  306360 cri.go:89] found id: ""
	I0407 14:15:51.838292  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.838302  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:51.838310  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:51.838378  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:51.880251  306360 cri.go:89] found id: ""
	I0407 14:15:51.880284  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.880294  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:51.880302  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:51.880373  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:51.922175  306360 cri.go:89] found id: ""
	I0407 14:15:51.922203  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.922213  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:51.922220  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:51.922291  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:51.963932  306360 cri.go:89] found id: ""
	I0407 14:15:51.963960  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.963970  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:51.963985  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:51.964000  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:52.046274  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:52.046322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:52.093979  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:52.094019  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:52.148613  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:52.148660  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:52.162525  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:52.162559  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:52.239788  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:54.740063  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:54.757191  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:54.757267  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:54.789524  306360 cri.go:89] found id: ""
	I0407 14:15:54.789564  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.789575  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:54.789584  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:54.789646  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:54.823746  306360 cri.go:89] found id: ""
	I0407 14:15:54.823785  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.823797  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:54.823805  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:54.823875  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:54.861371  306360 cri.go:89] found id: ""
	I0407 14:15:54.861406  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.861417  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:54.861424  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:54.861486  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:54.896286  306360 cri.go:89] found id: ""
	I0407 14:15:54.896318  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.896327  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:54.896334  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:54.896402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:54.938594  306360 cri.go:89] found id: ""
	I0407 14:15:54.938632  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.938643  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:54.938651  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:54.938722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:54.971701  306360 cri.go:89] found id: ""
	I0407 14:15:54.971737  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.971745  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:54.971751  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:54.971809  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:55.008651  306360 cri.go:89] found id: ""
	I0407 14:15:55.008682  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.008693  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:55.008700  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:55.008768  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:55.043829  306360 cri.go:89] found id: ""
	I0407 14:15:55.043860  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.043868  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:55.043879  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:55.043899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:55.094682  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:55.094720  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:55.109798  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:55.109855  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:55.187514  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:55.187540  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:55.187555  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:55.273313  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:55.273360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:56.021402  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.021428  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.021442  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.066617  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.066650  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.521245  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.526043  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:56.526070  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.021581  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.026339  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:57.026365  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.521022  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.525667  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:57.532348  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:57.532377  308831 api_server.go:131] duration metric: took 4.011467673s to wait for apiserver health ...
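
During the apiserver restart the /healthz endpoint answers in three phases, all visible above: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, and finally 200/ok. A rough bash equivalent of that wait loop (illustrative; node IP and port taken from the log, a ~2s per-request timeout is assumed):

    # poll until the body is exactly "ok"
    until curl -sk --max-time 2 https://192.168.39.230:8443/healthz | grep -qx ok; do
      sleep 0.5
    done
    echo "apiserver healthy"
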
	I0407 14:15:57.532391  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:57.532400  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:57.534300  308831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 14:15:57.535520  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 14:15:57.547844  308831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
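
The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is what configures the bridge CNI. Its exact contents are not shown in the log; the following is only a representative bridge conflist (the subnet and plugin options are assumptions for illustration, not the file minikube writes):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
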
	I0407 14:15:57.567595  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:57.571906  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:57.571945  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:57.571953  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:57.571961  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:57.571967  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:57.571978  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 14:15:57.571986  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:57.571991  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:57.571999  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 14:15:57.572004  308831 system_pods.go:74] duration metric: took 4.389672ms to wait for pod list to return data ...
	I0407 14:15:57.572014  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:57.575009  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:57.575029  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:57.575040  308831 node_conditions.go:105] duration metric: took 3.021612ms to run NodePressure ...
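
The NodePressure verification above simply reads the capacity the node reports (17734596Ki ephemeral storage, 2 CPUs). Once the kubeconfig has been written back (a few lines below), the same values can be checked by hand; a kubeconfig context named after the profile is assumed:

    kubectl --context newest-cni-541721 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'
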
	I0407 14:15:57.575056  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:57.880816  308831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:15:57.894579  308831 ops.go:34] apiserver oom_adj: -16
	I0407 14:15:57.894607  308831 kubeadm.go:597] duration metric: took 7.220492712s to restartPrimaryControlPlane
	I0407 14:15:57.894619  308831 kubeadm.go:394] duration metric: took 7.268398637s to StartCluster
	I0407 14:15:57.894641  308831 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.894822  308831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:57.896037  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.896384  308831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:15:57.896474  308831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:15:57.896568  308831 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-541721"
	I0407 14:15:57.896589  308831 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-541721"
	W0407 14:15:57.896596  308831 addons.go:247] addon storage-provisioner should already be in state true
	I0407 14:15:57.896613  308831 addons.go:69] Setting default-storageclass=true in profile "newest-cni-541721"
	I0407 14:15:57.896638  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896625  308831 addons.go:69] Setting dashboard=true in profile "newest-cni-541721"
	I0407 14:15:57.896642  308831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-541721"
	I0407 14:15:57.896665  308831 addons.go:238] Setting addon dashboard=true in "newest-cni-541721"
	W0407 14:15:57.896675  308831 addons.go:247] addon dashboard should already be in state true
	I0407 14:15:57.896682  308831 addons.go:69] Setting metrics-server=true in profile "newest-cni-541721"
	I0407 14:15:57.896709  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896720  308831 addons.go:238] Setting addon metrics-server=true in "newest-cni-541721"
	W0407 14:15:57.896730  308831 addons.go:247] addon metrics-server should already be in state true
	I0407 14:15:57.896761  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.897130  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897144  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897129  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897179  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897224  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897170  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897247  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897289  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897439  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:57.898160  308831 out.go:177] * Verifying Kubernetes components...
	I0407 14:15:57.899427  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:57.914645  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0407 14:15:57.914658  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0407 14:15:57.915088  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915221  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915772  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915789  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.915919  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915929  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.916179  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916232  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916344  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.916804  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.916846  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.917048  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0407 14:15:57.917542  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.918163  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.918178  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.918569  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.919092  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919123  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.919192  308831 addons.go:238] Setting addon default-storageclass=true in "newest-cni-541721"
	W0407 14:15:57.919205  308831 addons.go:247] addon default-storageclass should already be in state true
	I0407 14:15:57.919233  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.919576  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919605  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.920769  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0407 14:15:57.921236  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.921729  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.921752  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.922088  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.922572  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.922608  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.937572  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0407 14:15:57.937695  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0407 14:15:57.938194  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.938660  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0407 14:15:57.938863  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.938887  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.938963  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939251  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.939620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0407 14:15:57.939642  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939848  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.939900  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.940021  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.940071  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940086  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940288  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940312  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940532  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.940651  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940673  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940694  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.940997  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941226  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941293  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.941418  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.943066  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.943556  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.944233  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.945868  308831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:15:57.945873  308831 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 14:15:57.945925  308831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 14:15:57.947501  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 14:15:57.947525  308831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 14:15:57.947549  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.947592  308831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:57.947606  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 14:15:57.947682  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.949194  308831 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 14:15:57.950596  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 14:15:57.950612  308831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 14:15:57.950633  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.951106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.951536  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951608  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.951866  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952012  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952224  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.952336  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.952370  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.952455  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.952697  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952854  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952995  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.954108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954455  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.954482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954659  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.954827  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.954967  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.955093  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.975194  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0407 14:15:57.975616  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.976107  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.976139  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.976544  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.976751  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.978595  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.978824  308831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:57.978842  308831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 14:15:57.978862  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.982043  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982380  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.982410  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982678  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.982840  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.982966  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.983081  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
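
Each of these SSH sessions uses the key, user and address recorded above; the same connection can be opened by hand with either of the following (the second assumes the newest-cni-541721 profile still exists):

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa \
        docker@192.168.39.230

    minikube -p newest-cni-541721 ssh
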
	I0407 14:15:58.102404  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:58.120015  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:58.120102  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:58.135300  308831 api_server.go:72] duration metric: took 238.836482ms to wait for apiserver process to appear ...
	I0407 14:15:58.135329  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:58.135349  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:58.141206  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:58.142587  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:58.142606  308831 api_server.go:131] duration metric: took 7.270895ms to wait for apiserver health ...
	I0407 14:15:58.142614  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:58.146900  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:58.146926  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:58.146935  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:58.146943  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:58.146948  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:58.146955  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running
	I0407 14:15:58.146961  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:58.146966  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:58.146972  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running
	I0407 14:15:58.146978  308831 system_pods.go:74] duration metric: took 4.358597ms to wait for pod list to return data ...
	I0407 14:15:58.146986  308831 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:15:58.150282  308831 default_sa.go:45] found service account: "default"
	I0407 14:15:58.150299  308831 default_sa.go:55] duration metric: took 3.303841ms for default service account to be created ...
	I0407 14:15:58.150309  308831 kubeadm.go:582] duration metric: took 253.863257ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:58.150322  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:58.153173  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:58.153197  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:58.153211  308831 node_conditions.go:105] duration metric: took 2.884813ms to run NodePressure ...
	I0407 14:15:58.153224  308831 start.go:241] waiting for startup goroutines ...
	I0407 14:15:58.193220  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:58.219746  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:58.279762  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 14:15:58.279792  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 14:15:58.310829  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 14:15:58.310854  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 14:15:58.365195  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 14:15:58.365223  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 14:15:58.418268  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 14:15:58.418311  308831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 14:15:58.452087  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 14:15:58.452125  308831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 14:15:58.472397  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.472435  308831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 14:15:58.493767  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 14:15:58.493792  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 14:15:58.538632  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.591626  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 14:15:58.591661  308831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 14:15:58.674454  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 14:15:58.674490  308831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 14:15:58.705316  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 14:15:58.705355  308831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 14:15:58.728819  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 14:15:58.728849  308831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 14:15:58.748297  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 14:15:58.748328  308831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 14:15:58.771377  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 14:15:59.673041  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.453258343s)
	I0407 14:15:59.673107  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673119  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673482  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673507  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673518  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673768  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673788  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673805  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.674036  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.480774359s)
	I0407 14:15:59.674082  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674344  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674361  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674372  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674387  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674683  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674696  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674710  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.695131  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.695152  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.695501  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.695523  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.695537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090200  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.55151201s)
	I0407 14:16:00.090258  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090628  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090645  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.090662  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090672  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090678  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090980  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090989  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090997  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.091007  308831 addons.go:479] Verifying addon metrics-server=true in "newest-cni-541721"
	I0407 14:16:00.245449  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.473999327s)
	I0407 14:16:00.245510  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245797  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.245858  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.245882  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245894  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245895  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.246148  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.246165  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.247614  308831 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-541721 addons enable metrics-server
	
	I0407 14:16:00.248959  308831 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0407 14:16:00.250078  308831 addons.go:514] duration metric: took 2.353612079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0407 14:16:00.250126  308831 start.go:246] waiting for cluster config update ...
	I0407 14:16:00.250153  308831 start.go:255] writing updated cluster config ...
	I0407 14:16:00.250500  308831 ssh_runner.go:195] Run: rm -f paused
	I0407 14:16:00.299045  308831 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:16:00.300679  308831 out.go:177] * Done! kubectl is now configured to use "newest-cni-541721" cluster and "default" namespace by default
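
With the restart finished, the addons that were just enabled (storage-provisioner, default-storageclass, metrics-server, dashboard) can be confirmed from the host, for example:

    minikube -p newest-cni-541721 addons list
    kubectl --context newest-cni-541721 get pods -A
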
	I0407 14:15:57.811712  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:57.825529  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:57.825597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:57.863098  306360 cri.go:89] found id: ""
	I0407 14:15:57.863139  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.863152  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:57.863160  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:57.863231  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:57.902011  306360 cri.go:89] found id: ""
	I0407 14:15:57.902049  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.902059  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:57.902067  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:57.902134  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:57.965448  306360 cri.go:89] found id: ""
	I0407 14:15:57.965475  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.965485  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:57.965492  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:57.965554  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:58.012478  306360 cri.go:89] found id: ""
	I0407 14:15:58.012508  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.012519  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:58.012528  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:58.012591  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:58.046324  306360 cri.go:89] found id: ""
	I0407 14:15:58.046352  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.046359  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:58.046365  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:58.046416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:58.082655  306360 cri.go:89] found id: ""
	I0407 14:15:58.082690  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.082701  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:58.082771  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:58.082845  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:58.117888  306360 cri.go:89] found id: ""
	I0407 14:15:58.117917  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.117929  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:58.117936  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:58.118002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:58.158074  306360 cri.go:89] found id: ""
	I0407 14:15:58.158100  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.158110  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:58.158122  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:58.158140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:58.250799  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:58.250823  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:58.250839  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:58.331250  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:58.331289  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:58.373589  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:58.373616  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:58.441487  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:58.441523  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:00.956209  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:00.969519  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:00.969597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:01.006091  306360 cri.go:89] found id: ""
	I0407 14:16:01.006123  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.006134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:01.006142  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:01.006208  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:01.040220  306360 cri.go:89] found id: ""
	I0407 14:16:01.040251  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.040262  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:01.040271  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:01.040341  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:01.075777  306360 cri.go:89] found id: ""
	I0407 14:16:01.075813  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.075824  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:01.075829  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:01.075904  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:01.113161  306360 cri.go:89] found id: ""
	I0407 14:16:01.113188  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.113196  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:01.113202  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:01.113264  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:01.145743  306360 cri.go:89] found id: ""
	I0407 14:16:01.145781  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.145793  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:01.145800  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:01.145891  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:01.180531  306360 cri.go:89] found id: ""
	I0407 14:16:01.180564  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.180576  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:01.180585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:01.180651  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:01.219646  306360 cri.go:89] found id: ""
	I0407 14:16:01.219679  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.219691  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:01.219699  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:01.219765  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:01.262312  306360 cri.go:89] found id: ""
	I0407 14:16:01.262345  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.262352  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:01.262363  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:01.262377  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:01.339749  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:01.339783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:01.382985  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:01.383022  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:01.434889  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:01.434921  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:01.451353  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:01.451378  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:01.532064  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
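
The describe-nodes step for this profile keeps failing because nothing answers on localhost:8443 inside the node (its v1.20.0 apiserver has not come up yet in this window). A few illustrative checks from inside that machine, not part of the test itself:

    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 50 --no-pager
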
	I0407 14:16:04.032625  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:04.045945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:04.046004  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:04.079093  306360 cri.go:89] found id: ""
	I0407 14:16:04.079123  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.079134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:04.079143  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:04.079206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:04.114148  306360 cri.go:89] found id: ""
	I0407 14:16:04.114181  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.114192  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:04.114200  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:04.114270  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:04.152718  306360 cri.go:89] found id: ""
	I0407 14:16:04.152747  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.152758  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:04.152766  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:04.152841  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:04.190031  306360 cri.go:89] found id: ""
	I0407 14:16:04.190065  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.190077  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:04.190085  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:04.190163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:04.227623  306360 cri.go:89] found id: ""
	I0407 14:16:04.227660  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.227671  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:04.227679  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:04.227747  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:04.268005  306360 cri.go:89] found id: ""
	I0407 14:16:04.268035  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.268047  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:04.268055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:04.268125  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:04.304340  306360 cri.go:89] found id: ""
	I0407 14:16:04.304364  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.304374  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:04.304381  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:04.304456  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:04.341425  306360 cri.go:89] found id: ""
	I0407 14:16:04.341490  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.341502  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:04.341513  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:04.341526  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:04.398148  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:04.398179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:04.414586  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:04.414612  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:04.482621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.482650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:04.482669  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:04.556315  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:04.556359  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.115968  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:07.129613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:07.129672  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:07.167142  306360 cri.go:89] found id: ""
	I0407 14:16:07.167170  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.167180  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:07.167187  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:07.167246  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:07.198691  306360 cri.go:89] found id: ""
	I0407 14:16:07.198723  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.198730  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:07.198736  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:07.198790  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:07.231226  306360 cri.go:89] found id: ""
	I0407 14:16:07.231259  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.231268  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:07.231274  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:07.231326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:07.263714  306360 cri.go:89] found id: ""
	I0407 14:16:07.263746  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.263757  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:07.263765  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:07.263828  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:07.301046  306360 cri.go:89] found id: ""
	I0407 14:16:07.301079  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.301090  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:07.301098  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:07.301189  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:07.333910  306360 cri.go:89] found id: ""
	I0407 14:16:07.333938  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.333948  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:07.333956  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:07.334023  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:07.366899  306360 cri.go:89] found id: ""
	I0407 14:16:07.366927  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.366937  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:07.366945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:07.367014  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:07.398845  306360 cri.go:89] found id: ""
	I0407 14:16:07.398878  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.398887  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:07.398899  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:07.398912  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:07.411632  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:07.411663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:07.478836  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:07.478865  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:07.478883  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:07.557802  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:07.557852  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.602752  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:07.602785  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:10.155705  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:10.169146  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:10.169232  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:10.202657  306360 cri.go:89] found id: ""
	I0407 14:16:10.202694  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.202702  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:10.202708  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:10.202761  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:10.238239  306360 cri.go:89] found id: ""
	I0407 14:16:10.238272  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.238284  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:10.238292  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:10.238363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:10.270804  306360 cri.go:89] found id: ""
	I0407 14:16:10.270833  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.270840  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:10.270847  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:10.270897  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:10.319453  306360 cri.go:89] found id: ""
	I0407 14:16:10.319491  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.319502  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:10.319510  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:10.319581  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:10.352622  306360 cri.go:89] found id: ""
	I0407 14:16:10.352654  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.352663  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:10.352670  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:10.352741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:10.385869  306360 cri.go:89] found id: ""
	I0407 14:16:10.385897  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.385906  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:10.385912  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:10.385979  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:10.420689  306360 cri.go:89] found id: ""
	I0407 14:16:10.420715  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.420724  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:10.420729  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:10.420786  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:10.454182  306360 cri.go:89] found id: ""
	I0407 14:16:10.454210  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.454226  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:10.454238  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:10.454258  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:10.467987  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:10.468021  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:10.535621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:10.535650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:10.535663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:10.613921  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:10.613963  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:10.663267  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:10.663299  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.220167  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:13.234197  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:13.234271  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:13.273116  306360 cri.go:89] found id: ""
	I0407 14:16:13.273159  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.273174  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:13.273180  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:13.273236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:13.309984  306360 cri.go:89] found id: ""
	I0407 14:16:13.310024  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.310036  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:13.310044  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:13.310110  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:13.343107  306360 cri.go:89] found id: ""
	I0407 14:16:13.343145  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.343156  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:13.343162  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:13.343226  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:13.375826  306360 cri.go:89] found id: ""
	I0407 14:16:13.375857  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.375865  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:13.375871  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:13.375934  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:13.408895  306360 cri.go:89] found id: ""
	I0407 14:16:13.408930  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.408940  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:13.408945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:13.409002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:13.442272  306360 cri.go:89] found id: ""
	I0407 14:16:13.442309  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.442319  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:13.442329  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:13.442395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:13.478556  306360 cri.go:89] found id: ""
	I0407 14:16:13.478592  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.478600  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:13.478606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:13.478671  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:13.512229  306360 cri.go:89] found id: ""
	I0407 14:16:13.512264  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.512274  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:13.512287  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:13.512304  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.561858  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:13.561899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:13.575518  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:13.575549  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:13.638490  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:13.638515  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:13.638528  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:13.714178  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:13.714219  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.252354  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:16.265849  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:16.265939  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:16.298742  306360 cri.go:89] found id: ""
	I0407 14:16:16.298774  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.298781  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:16.298788  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:16.298844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:16.332441  306360 cri.go:89] found id: ""
	I0407 14:16:16.332476  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.332487  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:16.332496  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:16.332563  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:16.365820  306360 cri.go:89] found id: ""
	I0407 14:16:16.365857  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.365868  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:16.365880  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:16.365972  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:16.399094  306360 cri.go:89] found id: ""
	I0407 14:16:16.399125  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.399134  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:16.399140  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:16.399193  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:16.433322  306360 cri.go:89] found id: ""
	I0407 14:16:16.433356  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.433364  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:16.433372  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:16.433428  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:16.466435  306360 cri.go:89] found id: ""
	I0407 14:16:16.466466  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.466476  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:16.466484  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:16.466551  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:16.498858  306360 cri.go:89] found id: ""
	I0407 14:16:16.498887  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.498895  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:16.498900  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:16.498952  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:16.531126  306360 cri.go:89] found id: ""
	I0407 14:16:16.531166  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.531177  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:16.531192  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:16.531206  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:16.610817  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:16.610857  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.650145  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:16.650180  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:16.699735  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:16.699821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:16.719603  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:16.719634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:16.813399  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.315126  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:19.327908  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:19.327993  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:19.361834  306360 cri.go:89] found id: ""
	I0407 14:16:19.361868  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.361877  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:19.361883  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:19.361947  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:19.396519  306360 cri.go:89] found id: ""
	I0407 14:16:19.396554  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.396565  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:19.396573  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:19.396645  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:19.431627  306360 cri.go:89] found id: ""
	I0407 14:16:19.431656  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.431665  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:19.431671  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:19.431741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:19.465284  306360 cri.go:89] found id: ""
	I0407 14:16:19.465315  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.465323  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:19.465332  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:19.465393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:19.497940  306360 cri.go:89] found id: ""
	I0407 14:16:19.497970  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.497984  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:19.497991  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:19.498060  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:19.533336  306360 cri.go:89] found id: ""
	I0407 14:16:19.533376  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.533389  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:19.533398  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:19.533469  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:19.568026  306360 cri.go:89] found id: ""
	I0407 14:16:19.568059  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.568076  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:19.568084  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:19.568153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:19.601780  306360 cri.go:89] found id: ""
	I0407 14:16:19.601835  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.601844  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:19.601854  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:19.601865  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:19.642543  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:19.642574  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:19.692073  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:19.692119  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:19.705748  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:19.705783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:19.772531  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.772556  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:19.772577  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.351857  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:22.365447  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:22.365514  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:22.403999  306360 cri.go:89] found id: ""
	I0407 14:16:22.404028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.404036  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:22.404043  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:22.404094  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:22.441384  306360 cri.go:89] found id: ""
	I0407 14:16:22.441417  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.441426  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:22.441432  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:22.441487  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:22.490577  306360 cri.go:89] found id: ""
	I0407 14:16:22.490610  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.490621  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:22.490628  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:22.490714  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:22.537991  306360 cri.go:89] found id: ""
	I0407 14:16:22.538028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.538040  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:22.538049  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:22.538120  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:22.584777  306360 cri.go:89] found id: ""
	I0407 14:16:22.584812  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.584824  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:22.584832  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:22.584920  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:22.627558  306360 cri.go:89] found id: ""
	I0407 14:16:22.627588  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.627596  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:22.627602  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:22.627665  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:22.664048  306360 cri.go:89] found id: ""
	I0407 14:16:22.664080  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.664089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:22.664125  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:22.664180  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:22.697281  306360 cri.go:89] found id: ""
	I0407 14:16:22.697318  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.697329  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:22.697345  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:22.697360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:22.750380  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:22.750418  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:22.764135  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:22.764163  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:22.830720  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:22.830756  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:22.830775  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.910687  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:22.910728  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.452699  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:25.466127  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:25.466217  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:25.503288  306360 cri.go:89] found id: ""
	I0407 14:16:25.503320  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.503329  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:25.503335  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:25.503395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:25.535855  306360 cri.go:89] found id: ""
	I0407 14:16:25.535891  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.535900  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:25.535907  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:25.535969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:25.569103  306360 cri.go:89] found id: ""
	I0407 14:16:25.569135  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.569143  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:25.569149  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:25.569201  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:25.604482  306360 cri.go:89] found id: ""
	I0407 14:16:25.604521  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.604533  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:25.604542  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:25.604600  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:25.638915  306360 cri.go:89] found id: ""
	I0407 14:16:25.638948  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.638958  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:25.638966  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:25.639042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:25.673087  306360 cri.go:89] found id: ""
	I0407 14:16:25.673122  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.673134  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:25.673141  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:25.673211  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:25.706454  306360 cri.go:89] found id: ""
	I0407 14:16:25.706490  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.706502  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:25.706511  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:25.706596  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:25.739824  306360 cri.go:89] found id: ""
	I0407 14:16:25.739861  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.739872  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:25.739885  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:25.739900  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:25.818002  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:25.818045  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.866681  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:25.866715  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:25.920791  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:25.920824  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:25.934838  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:25.934870  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:26.005417  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:28.507450  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:28.526968  306360 kubeadm.go:597] duration metric: took 4m4.425341549s to restartPrimaryControlPlane
	W0407 14:16:28.527068  306360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0407 14:16:28.527097  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:16:33.604963  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.077840903s)
	I0407 14:16:33.605045  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:16:33.619392  306360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:16:33.629694  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:16:33.639997  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:16:33.640021  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:16:33.640070  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:16:33.648891  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:16:33.648942  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:16:33.657964  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:16:33.666862  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:16:33.666907  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:16:33.675917  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.684806  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:16:33.684865  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.694385  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:16:33.703347  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:16:33.703399  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:16:33.712413  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:16:33.785507  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:16:33.785591  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:16:33.919661  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:16:33.919797  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:16:33.919913  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:16:34.088006  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:16:34.090058  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:16:34.090179  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:16:34.090273  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:16:34.090394  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:16:34.090467  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:16:34.090559  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:16:34.090629  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:16:34.090692  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:16:34.090745  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:16:34.090963  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:16:34.091371  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:16:34.091513  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:16:34.091573  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:16:34.250084  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:16:34.456551  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:16:34.600069  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:16:34.730872  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:16:34.745839  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:16:34.748203  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:16:34.748481  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:16:34.899583  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:16:34.901383  306360 out.go:235]   - Booting up control plane ...
	I0407 14:16:34.901512  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:16:34.910634  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:16:34.913019  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:16:34.913965  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:16:34.916441  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:17:14.918244  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:17:14.918361  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:14.918550  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:19.918793  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:19.919063  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:29.919626  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:29.919857  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:49.920620  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:49.920914  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.922713  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:18:29.922989  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.923024  306360 kubeadm.go:310] 
	I0407 14:18:29.923100  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:18:29.923192  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:18:29.923212  306360 kubeadm.go:310] 
	I0407 14:18:29.923266  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:18:29.923310  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:18:29.923461  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:18:29.923472  306360 kubeadm.go:310] 
	I0407 14:18:29.923695  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:18:29.923740  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:18:29.923826  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:18:29.923853  306360 kubeadm.go:310] 
	I0407 14:18:29.924004  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:18:29.924126  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:18:29.924136  306360 kubeadm.go:310] 
	I0407 14:18:29.924282  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:18:29.924392  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:18:29.924528  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:18:29.924627  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:18:29.924654  306360 kubeadm.go:310] 
	I0407 14:18:29.924807  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:18:29.924945  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:18:29.925037  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 14:18:29.925275  306360 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 14:18:29.925332  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:18:35.351481  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.426121458s)
	I0407 14:18:35.351559  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:18:35.365827  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:18:35.376549  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:18:35.376577  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:18:35.376637  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:18:35.386629  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:18:35.386696  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:18:35.397247  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:18:35.406945  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:18:35.407018  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:18:35.416924  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.426596  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:18:35.426665  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.436695  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:18:35.446316  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:18:35.446368  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:18:35.455990  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:18:35.529786  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:18:35.529882  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:18:35.669860  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:18:35.670044  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:18:35.670206  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:18:35.849445  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:18:35.856509  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:18:35.856606  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:18:35.856681  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:18:35.856771  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:18:35.856853  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:18:35.856956  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:18:35.857016  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:18:35.857075  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:18:35.857126  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:18:35.857196  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:18:35.857268  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:18:35.857304  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:18:35.857357  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:18:35.974809  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:18:36.175364  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:18:36.293266  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:18:36.465625  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:18:36.480525  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:18:36.481848  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:18:36.481922  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:18:36.613415  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:18:36.615110  306360 out.go:235]   - Booting up control plane ...
	I0407 14:18:36.615269  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:18:36.628134  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:18:36.629532  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:18:36.630589  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:18:36.634513  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:19:16.636775  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:19:16.637057  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:16.637316  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:21.638264  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:21.638529  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:31.638701  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:31.638962  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:51.638889  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:51.639128  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638384  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:20:31.638644  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638668  306360 kubeadm.go:310] 
	I0407 14:20:31.638702  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:20:31.638742  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:20:31.638748  306360 kubeadm.go:310] 
	I0407 14:20:31.638775  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:20:31.638810  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:20:31.638898  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:20:31.638904  306360 kubeadm.go:310] 
	I0407 14:20:31.638985  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:20:31.639023  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:20:31.639065  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:20:31.639072  306360 kubeadm.go:310] 
	I0407 14:20:31.639203  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:20:31.639327  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:20:31.639358  306360 kubeadm.go:310] 
	I0407 14:20:31.639513  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:20:31.639633  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:20:31.639734  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:20:31.639862  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:20:31.639875  306360 kubeadm.go:310] 
	I0407 14:20:31.640981  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:20:31.641122  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:20:31.641237  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:20:31.641301  306360 kubeadm.go:394] duration metric: took 8m7.609204589s to StartCluster
	I0407 14:20:31.641373  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:20:31.641452  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:20:31.685303  306360 cri.go:89] found id: ""
	I0407 14:20:31.685334  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.685345  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:20:31.685353  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:20:31.685419  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:20:31.719244  306360 cri.go:89] found id: ""
	I0407 14:20:31.719274  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.719285  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:20:31.719293  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:20:31.719367  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:20:31.753252  306360 cri.go:89] found id: ""
	I0407 14:20:31.753282  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.753292  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:20:31.753299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:20:31.753366  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:20:31.783957  306360 cri.go:89] found id: ""
	I0407 14:20:31.784001  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.784014  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:20:31.784024  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:20:31.784113  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:20:31.819615  306360 cri.go:89] found id: ""
	I0407 14:20:31.819652  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.819660  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:20:31.819666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:20:31.819730  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:20:31.855903  306360 cri.go:89] found id: ""
	I0407 14:20:31.855942  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.855954  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:20:31.855962  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:20:31.856028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:20:31.890988  306360 cri.go:89] found id: ""
	I0407 14:20:31.891018  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.891027  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:20:31.891033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:20:31.891086  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:20:31.924794  306360 cri.go:89] found id: ""
	I0407 14:20:31.924827  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.924837  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:20:31.924861  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:20:31.924876  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:20:31.972904  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:20:31.972948  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:20:31.988056  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:20:31.988090  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:20:32.061617  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:20:32.061657  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:20:32.061672  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:20:32.165554  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:20:32.165600  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 14:20:32.208010  306360 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:20:32.208080  306360 out.go:270] * 
	W0407 14:20:32.208169  306360 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.208186  306360 out.go:270] * 
	W0407 14:20:32.209134  306360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:20:32.213132  306360 out.go:201] 
	W0407 14:20:32.214433  306360 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.214485  306360 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:20:32.214528  306360 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:20:32.216101  306360 out.go:201] 
	
	
	==> CRI-O <==
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.197489226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744035633197467690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98c1ed44-4106-43e3-8b41-45b9354a14d0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.198062925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd848703-87fb-46a5-afbc-6b8132cd33ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.198111646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd848703-87fb-46a5-afbc-6b8132cd33ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.198140409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd848703-87fb-46a5-afbc-6b8132cd33ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.227711775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a64d106-f1ca-425f-8943-d79aee09284e name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.227790691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a64d106-f1ca-425f-8943-d79aee09284e name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.228795686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbd44575-21ad-47d4-a99f-0adf4507b1c0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.229201528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744035633229178833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbd44575-21ad-47d4-a99f-0adf4507b1c0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.229631877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caf2fdce-a7f2-447c-83c8-cc75f361e282 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.229679155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caf2fdce-a7f2-447c-83c8-cc75f361e282 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.229709512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=caf2fdce-a7f2-447c-83c8-cc75f361e282 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.260716714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71fa7f9d-5e32-41de-80f8-780ab917acbd name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.260791541Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71fa7f9d-5e32-41de-80f8-780ab917acbd name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.262486670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ef3e58-9827-4589-9faa-5fba4426850f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.262892540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744035633262872085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ef3e58-9827-4589-9faa-5fba4426850f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.263554386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a475cdb-026f-4713-9ee1-a909594933d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.263606500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a475cdb-026f-4713-9ee1-a909594933d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.263639043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a475cdb-026f-4713-9ee1-a909594933d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.294959135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fcf6813-0dff-4f4c-bd69-60a15704f455 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.295077653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fcf6813-0dff-4f4c-bd69-60a15704f455 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.296570019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a4e5335-75c4-4643-ae08-7996a338e23a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.296945399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744035633296923418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a4e5335-75c4-4643-ae08-7996a338e23a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.297666582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f3664d9-bb73-4e86-88f4-2e4908664e2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.297736051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f3664d9-bb73-4e86-88f4-2e4908664e2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:20:33 old-k8s-version-405646 crio[630]: time="2025-04-07 14:20:33.297765875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7f3664d9-bb73-4e86-88f4-2e4908664e2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 14:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053325] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041250] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 7 14:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.811633] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.228522] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.053600] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065282] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177286] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.157141] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.250668] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.115917] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069863] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.742427] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +13.578914] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 14:16] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Apr 7 14:18] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.067537] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:20:33 up 8 min,  0 users,  load average: 0.00, 0.04, 0.02
	Linux old-k8s-version-405646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000b68b40)
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: goroutine 160 [select]:
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cabef0, 0x4f0ac20, 0xc00098d590, 0x1, 0xc0001020c0)
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000932c40, 0xc0001020c0)
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b269f0, 0xc000b76720)
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 07 14:20:31 old-k8s-version-405646 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 07 14:20:31 old-k8s-version-405646 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 14:20:31 old-k8s-version-405646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 14:20:32 old-k8s-version-405646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 07 14:20:32 old-k8s-version-405646 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 14:20:32 old-k8s-version-405646 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 14:20:32 old-k8s-version-405646 kubelet[5611]: I0407 14:20:32.305468    5611 server.go:416] Version: v1.20.0
	Apr 07 14:20:32 old-k8s-version-405646 kubelet[5611]: I0407 14:20:32.305933    5611 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 14:20:32 old-k8s-version-405646 kubelet[5611]: I0407 14:20:32.308898    5611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 14:20:32 old-k8s-version-405646 kubelet[5611]: W0407 14:20:32.310077    5611 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 07 14:20:32 old-k8s-version-405646 kubelet[5611]: I0407 14:20:32.310167    5611 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
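The kubeadm output above repeatedly probes the kubelet health endpoint (localhost:10248/healthz) and recommends checking the kubelet unit and the CRI-O containers directly. A minimal sketch of that diagnosis from the test host, assuming the profile name old-k8s-version-405646 from the log; the individual commands are the ones named in the kubeadm advice above, and everything else is illustrative rather than part of the report:

	# check whether the kubelet unit is running inside the minikube VM
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh "sudo systemctl status kubelet"
	# inspect the most recent kubelet journal entries for the crash reason
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# list all Kubernetes containers known to CRI-O, as the kubeadm message suggests
	out/minikube-linux-amd64 -p old-k8s-version-405646 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"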
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (230.572394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (518.05s)
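The kubelet log above ends with a v1.20.0 kubelet warning "Cannot detect current cgroup on cgroup v2" followed by repeated exits, which is consistent with the suggestion minikube prints to pass an explicit cgroup driver. A hedged sketch of retrying the start with that suggestion applied; the flag value is the one from the suggestion in the log, the profile name is from this test, and the rest is illustrative:

	# retry the second start with the kubelet cgroup driver pinned to systemd, as the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-405646 --extra-config=kubelet.cgroup-driver=systemd

If that does not help, the report's own advice applies: run "minikube logs --file=logs.txt" and attach logs.txt to a new issue at https://github.com/kubernetes/minikube/issues/new/choose.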

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:21:00.586140  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:21:01.002798  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:21:05.920708  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/default-k8s-diff-port-718753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:22:00.209628  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:22:06.075303  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/no-preload-421325/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:22:33.777559  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/no-preload-421325/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:22:47.434621  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:23:18.329713  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:23:22.060208  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/default-k8s-diff-port-718753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:23:22.631147  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:23:23.274114  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:23:49.762808  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/default-k8s-diff-port-718753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:23:51.955538  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:24:41.394202  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:24:45.697228  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:25:15.021534  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:25:16.470776  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:25:25.490933  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:25:50.519181  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:26:00.586147  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:26:01.002756  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 37 more times]
E0407 14:26:39.536178  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 8 more times]
E0407 14:26:48.558420  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 11 more times]
E0407 14:27:00.209434  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 5 more times]
E0407 14:27:06.075017  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/no-preload-421325/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 17 more times]
E0407 14:27:24.067496  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 22 more times]
E0407 14:27:47.434190  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 30 more times]
E0407 14:28:18.329303  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING line repeated 3 more times]
E0407 14:28:22.059799  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/default-k8s-diff-port-718753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:28:22.631931  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[WARNING above repeated 29 more times while the poll retried against the unreachable apiserver]
E0407 14:28:51.955516  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[WARNING above repeated 40 more times while the poll retried against the unreachable apiserver]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (233.211539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-405646" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
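For reference, the condition this wait polls for can be checked by hand against the same profile. A minimal sketch, assuming the kubectl context carries the profile name old-k8s-version-405646 (minikube names the context after the profile by default), and using the namespace, label selector, and apiserver address taken from the warnings above:

	# list the dashboard pods the test was waiting on
	kubectl --context old-k8s-version-405646 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# probe the apiserver endpoint directly; with no client credentials this normally returns an
	# HTTP auth error when the apiserver is healthy, versus the "connection refused" logged above
	curl -k https://192.168.72.163:8443/healthz

While the apiserver stays down, the first command fails with the same "connection refused" error, which points at control-plane unavailability rather than a dashboard-specific problem.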
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (217.358625ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
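The host check here reports "Running" while the earlier APIServer check reported "Stopped", i.e. the VM is up but the control plane is not serving. A single unformatted status call (sketch, same profile and binary as used above) prints all component fields at once instead of one template field per invocation:

	out/minikube-linux-amd64 status -p old-k8s-version-405646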
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-574417 image list                          | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-421325 image list                           | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| addons  | enable metrics-server -p newest-cni-541721             | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-718753                           | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-541721                  | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-541721 image list                           | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:15:25
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:15:25.628644  308831 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:15:25.628943  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.628954  308831 out.go:358] Setting ErrFile to fd 2...
	I0407 14:15:25.628958  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.629163  308831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:15:25.629716  308831 out.go:352] Setting JSON to false
	I0407 14:15:25.630676  308831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":21473,"bootTime":1744013853,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:15:25.630790  308831 start.go:139] virtualization: kvm guest
	I0407 14:15:25.632653  308831 out.go:177] * [newest-cni-541721] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:15:25.634114  308831 notify.go:220] Checking for updates...
	I0407 14:15:25.634125  308831 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:15:25.635477  308831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:15:25.636815  308831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:25.638126  308831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:15:25.639208  308831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:15:25.640304  308831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:15:25.642142  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:25.642732  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.642805  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.658473  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0407 14:15:25.659219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.659736  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.659760  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.660180  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.660352  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.660628  308831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:15:25.660918  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.660962  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.676620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0407 14:15:25.677061  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.677654  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.677687  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.678106  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.678327  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.714508  308831 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:15:25.715654  308831 start.go:297] selected driver: kvm2
	I0407 14:15:25.715669  308831 start.go:901] validating driver "kvm2" against &{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.715769  308831 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:15:25.716608  308831 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.716681  308831 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:15:25.731568  308831 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:15:25.731948  308831 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:25.731981  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:25.732021  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:25.732057  308831 start.go:340] cluster config:
	{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.732169  308831 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.734706  308831 out.go:177] * Starting "newest-cni-541721" primary control-plane node in "newest-cni-541721" cluster
	I0407 14:15:25.736251  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:25.736285  308831 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:15:25.736295  308831 cache.go:56] Caching tarball of preloaded images
	I0407 14:15:25.736375  308831 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:15:25.736390  308831 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:15:25.736522  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:25.736737  308831 start.go:360] acquireMachinesLock for newest-cni-541721: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:15:25.736784  308831 start.go:364] duration metric: took 28.182µs to acquireMachinesLock for "newest-cni-541721"
	I0407 14:15:25.736805  308831 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:15:25.736811  308831 fix.go:54] fixHost starting: 
	I0407 14:15:25.737111  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.737147  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.751728  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0407 14:15:25.752219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.752697  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.752718  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.753019  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.753228  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.753385  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:25.754926  308831 fix.go:112] recreateIfNeeded on newest-cni-541721: state=Stopped err=<nil>
	I0407 14:15:25.754953  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	W0407 14:15:25.755089  308831 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:15:25.757704  308831 out.go:177] * Restarting existing kvm2 VM for "newest-cni-541721" ...
	I0407 14:15:20.896637  306360 cri.go:89] found id: ""
	I0407 14:15:20.896666  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.896673  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:20.896679  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:20.896737  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:20.937796  306360 cri.go:89] found id: ""
	I0407 14:15:20.937828  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.937837  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:20.937843  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:20.937896  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:20.983104  306360 cri.go:89] found id: ""
	I0407 14:15:20.983138  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.983149  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:20.983157  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:20.983222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:21.024555  306360 cri.go:89] found id: ""
	I0407 14:15:21.024591  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.024602  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:21.024609  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:21.024685  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:21.068400  306360 cri.go:89] found id: ""
	I0407 14:15:21.068484  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.068495  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:21.068502  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:21.068572  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:21.107962  306360 cri.go:89] found id: ""
	I0407 14:15:21.107990  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.107998  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:21.108004  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:21.108067  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:21.147955  306360 cri.go:89] found id: ""
	I0407 14:15:21.147981  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.147989  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:21.147999  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:21.148010  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:21.164790  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:21.164818  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:21.236045  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:21.236068  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:21.236081  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:21.313784  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:21.313821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:21.357183  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:21.357215  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:23.907736  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:23.921413  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:23.921481  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:23.959486  306360 cri.go:89] found id: ""
	I0407 14:15:23.959513  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.959520  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:23.959526  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:23.959585  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:23.992912  306360 cri.go:89] found id: ""
	I0407 14:15:23.992938  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.992946  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:23.992952  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:23.993010  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:24.024279  306360 cri.go:89] found id: ""
	I0407 14:15:24.024308  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.024316  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:24.024323  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:24.024376  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:24.062320  306360 cri.go:89] found id: ""
	I0407 14:15:24.062353  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.062362  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:24.062371  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:24.062432  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:24.122748  306360 cri.go:89] found id: ""
	I0407 14:15:24.122774  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.122782  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:24.122787  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:24.122857  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:24.156773  306360 cri.go:89] found id: ""
	I0407 14:15:24.156803  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.156814  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:24.156831  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:24.156899  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:24.192903  306360 cri.go:89] found id: ""
	I0407 14:15:24.192940  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.192952  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:24.192960  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:24.193017  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:24.228041  306360 cri.go:89] found id: ""
	I0407 14:15:24.228081  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.228093  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:24.228105  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:24.228122  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:24.276177  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:24.276212  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:24.289668  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:24.289701  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:24.356935  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:24.356962  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:24.356981  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:24.442103  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:24.442140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
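	# Hedged sketch (manual repro, assuming SSH access to the node): the same CRI and
	# journal queries the log gatherer runs in the cycle above, reduced to one-liners.
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty output here means no apiserver container exists yet
	sudo journalctl -u kubelet -n 400                 # kubelet logs, as gathered above
	sudo journalctl -u crio -n 400                    # CRI-O logs, as gathered above
	sudo crictl ps -a || sudo docker ps -a            # overall container status fallback, as gathered above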
	I0407 14:15:25.758835  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Start
	I0407 14:15:25.759008  308831 main.go:141] libmachine: (newest-cni-541721) starting domain...
	I0407 14:15:25.759031  308831 main.go:141] libmachine: (newest-cni-541721) ensuring networks are active...
	I0407 14:15:25.759774  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network default is active
	I0407 14:15:25.760125  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network mk-newest-cni-541721 is active
	I0407 14:15:25.760533  308831 main.go:141] libmachine: (newest-cni-541721) getting domain XML...
	I0407 14:15:25.761459  308831 main.go:141] libmachine: (newest-cni-541721) creating domain...
	I0407 14:15:26.961388  308831 main.go:141] libmachine: (newest-cni-541721) waiting for IP...
	I0407 14:15:26.962280  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:26.962679  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:26.962806  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:26.962715  308884 retry.go:31] will retry after 224.710577ms: waiting for domain to come up
	I0407 14:15:27.189309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.189924  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.189984  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.189909  308884 retry.go:31] will retry after 298.222768ms: waiting for domain to come up
	I0407 14:15:27.489516  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.490094  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.490131  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.490026  308884 retry.go:31] will retry after 465.194234ms: waiting for domain to come up
	I0407 14:15:27.956675  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.957258  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.957283  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.957226  308884 retry.go:31] will retry after 534.441737ms: waiting for domain to come up
	I0407 14:15:28.493247  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:28.493782  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:28.493811  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:28.493750  308884 retry.go:31] will retry after 611.035562ms: waiting for domain to come up
	I0407 14:15:29.106699  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.107212  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.107234  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.107187  308884 retry.go:31] will retry after 705.783816ms: waiting for domain to come up
	I0407 14:15:29.814350  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.814874  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.814904  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.814847  308884 retry.go:31] will retry after 951.819617ms: waiting for domain to come up
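	# Hedged sketch (manual check, not what libmachine runs internally): inspect the
	# libvirt network being polled above to see whether the domain has a DHCP lease yet.
	# The domain and network names are the ones shown in this log.
	sudo virsh domiflist newest-cni-541721            # lists the domain's MAC (52:54:00:e6:36:ee)
	sudo virsh net-dhcp-leases mk-newest-cni-541721   # shows a lease once the guest requests an IP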
	I0407 14:15:26.983553  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:26.996033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:26.996104  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:27.029665  306360 cri.go:89] found id: ""
	I0407 14:15:27.029692  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.029700  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:27.029705  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:27.029756  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:27.069962  306360 cri.go:89] found id: ""
	I0407 14:15:27.069992  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.070000  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:27.070009  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:27.070074  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:27.112142  306360 cri.go:89] found id: ""
	I0407 14:15:27.112174  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.112182  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:27.112188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:27.112240  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:27.152647  306360 cri.go:89] found id: ""
	I0407 14:15:27.152675  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.152685  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:27.152691  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:27.152743  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:27.188973  306360 cri.go:89] found id: ""
	I0407 14:15:27.189004  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.189015  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:27.189023  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:27.189099  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:27.228054  306360 cri.go:89] found id: ""
	I0407 14:15:27.228085  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.228095  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:27.228102  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:27.228164  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:27.262089  306360 cri.go:89] found id: ""
	I0407 14:15:27.262121  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.262131  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:27.262152  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:27.262222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:27.298902  306360 cri.go:89] found id: ""
	I0407 14:15:27.298939  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.298951  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:27.298969  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:27.298988  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:27.338649  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:27.338676  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:27.388606  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:27.388653  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:27.403449  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:27.403491  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:27.469414  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:27.469448  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:27.469467  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.052698  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:30.071454  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:30.071529  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:30.104690  306360 cri.go:89] found id: ""
	I0407 14:15:30.104723  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.104733  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:30.104741  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:30.104805  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:30.139611  306360 cri.go:89] found id: ""
	I0407 14:15:30.139641  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.139651  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:30.139658  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:30.139724  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:30.173648  306360 cri.go:89] found id: ""
	I0407 14:15:30.173679  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.173691  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:30.173702  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:30.173766  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:30.207015  306360 cri.go:89] found id: ""
	I0407 14:15:30.207045  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.207055  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:30.207062  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:30.207141  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:30.242602  306360 cri.go:89] found id: ""
	I0407 14:15:30.242631  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.242642  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:30.242647  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:30.242698  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:30.275775  306360 cri.go:89] found id: ""
	I0407 14:15:30.275811  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.275824  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:30.275834  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:30.275906  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:30.310674  306360 cri.go:89] found id: ""
	I0407 14:15:30.310710  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.310722  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:30.310734  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:30.310803  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:30.342628  306360 cri.go:89] found id: ""
	I0407 14:15:30.342666  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.342677  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:30.342690  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:30.342704  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:30.390588  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:30.390625  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:30.405143  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:30.405179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:30.473557  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:30.473590  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:30.473607  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.555915  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:30.555961  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:30.768801  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:30.769309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:30.769368  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:30.769289  308884 retry.go:31] will retry after 1.473723354s: waiting for domain to come up
	I0407 14:15:32.244907  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:32.245389  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:32.245420  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:32.245345  308884 retry.go:31] will retry after 1.499915681s: waiting for domain to come up
	I0407 14:15:33.747106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:33.747641  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:33.747664  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:33.747621  308884 retry.go:31] will retry after 1.755869329s: waiting for domain to come up
	I0407 14:15:35.505715  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:35.506189  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:35.506224  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:35.506149  308884 retry.go:31] will retry after 1.908921296s: waiting for domain to come up
	I0407 14:15:33.094714  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:33.107818  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:33.107883  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:33.147279  306360 cri.go:89] found id: ""
	I0407 14:15:33.147310  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.147317  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:33.147323  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:33.147374  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:33.182866  306360 cri.go:89] found id: ""
	I0407 14:15:33.182895  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.182903  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:33.182909  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:33.182962  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:33.219845  306360 cri.go:89] found id: ""
	I0407 14:15:33.219881  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.219894  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:33.219903  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:33.219980  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:33.255785  306360 cri.go:89] found id: ""
	I0407 14:15:33.255818  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.255832  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:33.255838  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:33.255888  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:33.296287  306360 cri.go:89] found id: ""
	I0407 14:15:33.296320  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.296331  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:33.296339  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:33.296406  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:33.333123  306360 cri.go:89] found id: ""
	I0407 14:15:33.333156  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.333167  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:33.333174  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:33.333244  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:33.367813  306360 cri.go:89] found id: ""
	I0407 14:15:33.367844  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.367855  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:33.367862  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:33.367930  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:33.401927  306360 cri.go:89] found id: ""
	I0407 14:15:33.401957  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.401964  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:33.401974  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:33.401985  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:33.464350  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:33.464390  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:33.478831  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:33.478866  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:33.554322  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:33.554352  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:33.554370  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:33.632339  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:33.632381  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:37.417168  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:37.417658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:37.417734  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:37.417635  308884 retry.go:31] will retry after 3.116726133s: waiting for domain to come up
	I0407 14:15:40.537848  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:40.538357  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:40.538386  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:40.538314  308884 retry.go:31] will retry after 2.7485631s: waiting for domain to come up
	I0407 14:15:36.177635  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:36.191117  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:36.191215  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:36.229342  306360 cri.go:89] found id: ""
	I0407 14:15:36.229373  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.229384  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:36.229391  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:36.229461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:36.269119  306360 cri.go:89] found id: ""
	I0407 14:15:36.269151  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.269162  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:36.269170  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:36.269236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:36.312510  306360 cri.go:89] found id: ""
	I0407 14:15:36.312544  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.312556  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:36.312563  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:36.312632  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:36.346706  306360 cri.go:89] found id: ""
	I0407 14:15:36.346741  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.346753  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:36.346762  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:36.346830  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:36.382862  306360 cri.go:89] found id: ""
	I0407 14:15:36.382899  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.382912  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:36.382920  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:36.382989  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:36.424287  306360 cri.go:89] found id: ""
	I0407 14:15:36.424318  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.424329  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:36.424337  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:36.424407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:36.473843  306360 cri.go:89] found id: ""
	I0407 14:15:36.473891  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.473906  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:36.473916  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:36.474002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:36.532647  306360 cri.go:89] found id: ""
	I0407 14:15:36.532685  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.532697  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:36.532711  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:36.532727  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:36.599779  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:36.599820  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:36.614047  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:36.614082  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:36.692006  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:36.692030  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:36.692044  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:36.782142  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:36.782196  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.320544  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:39.333558  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:39.333630  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:39.367209  306360 cri.go:89] found id: ""
	I0407 14:15:39.367244  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.367255  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:39.367264  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:39.367338  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:39.406298  306360 cri.go:89] found id: ""
	I0407 14:15:39.406326  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.406335  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:39.406342  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:39.406407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:39.440090  306360 cri.go:89] found id: ""
	I0407 14:15:39.440118  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.440128  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:39.440134  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:39.440197  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:39.473483  306360 cri.go:89] found id: ""
	I0407 14:15:39.473514  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.473527  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:39.473534  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:39.473602  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:39.505571  306360 cri.go:89] found id: ""
	I0407 14:15:39.505599  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.505607  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:39.505613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:39.505676  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:39.538929  306360 cri.go:89] found id: ""
	I0407 14:15:39.538961  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.538971  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:39.538980  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:39.539045  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:39.572047  306360 cri.go:89] found id: ""
	I0407 14:15:39.572078  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.572089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:39.572097  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:39.572163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:39.605781  306360 cri.go:89] found id: ""
	I0407 14:15:39.605812  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.605854  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:39.605868  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:39.605885  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:39.684887  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:39.684931  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.725609  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:39.725639  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:39.776592  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:39.776634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:39.792687  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:39.792719  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:39.859832  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
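	# Hedged sketch (manual check on the control-plane node): the repeated
	# "connection to the server localhost:8443 was refused" above means nothing is
	# serving the apiserver port, which is why every "describe nodes" attempt fails.
	sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig       # same kubectl binary and kubeconfig the log uses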
	I0407 14:15:43.289843  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290313  308831 main.go:141] libmachine: (newest-cni-541721) found domain IP: 192.168.39.230
	I0407 14:15:43.290342  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has current primary IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290351  308831 main.go:141] libmachine: (newest-cni-541721) reserving static IP address...
	I0407 14:15:43.290797  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.290844  308831 main.go:141] libmachine: (newest-cni-541721) DBG | skip adding static IP to network mk-newest-cni-541721 - found existing host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"}
	I0407 14:15:43.290861  308831 main.go:141] libmachine: (newest-cni-541721) reserved static IP address 192.168.39.230 for domain newest-cni-541721
	I0407 14:15:43.290877  308831 main.go:141] libmachine: (newest-cni-541721) waiting for SSH...
	I0407 14:15:43.290888  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Getting to WaitForSSH function...
	I0407 14:15:43.293128  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293457  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.293482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293603  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH client type: external
	I0407 14:15:43.293630  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa (-rw-------)
	I0407 14:15:43.293658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:15:43.293670  308831 main.go:141] libmachine: (newest-cni-541721) DBG | About to run SSH command:
	I0407 14:15:43.293684  308831 main.go:141] libmachine: (newest-cni-541721) DBG | exit 0
	I0407 14:15:43.420319  308831 main.go:141] libmachine: (newest-cni-541721) DBG | SSH cmd err, output: <nil>: 
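	# Hedged sketch: the external SSH invocation logged above, reduced to a manual
	# connectivity test with the same key, options, and address.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa \
	  docker@192.168.39.230 'exit 0' && echo "SSH reachable"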
	I0407 14:15:43.420721  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetConfigRaw
	I0407 14:15:43.421390  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.424495  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.424838  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.424863  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.425125  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:43.425347  308831 machine.go:93] provisionDockerMachine start ...
	I0407 14:15:43.425369  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:43.425612  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.428118  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428491  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.428518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428670  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.428877  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429081  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429220  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.429407  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.429675  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.429686  308831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:15:43.536790  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:15:43.536829  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537083  308831 buildroot.go:166] provisioning hostname "newest-cni-541721"
	I0407 14:15:43.537120  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.540191  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540559  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.540585  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540732  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.540899  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541132  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.541478  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.541679  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.541692  308831 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-541721 && echo "newest-cni-541721" | sudo tee /etc/hostname
	I0407 14:15:43.663263  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-541721
	
	I0407 14:15:43.663296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.665913  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666215  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.666245  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666389  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.666571  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666726  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666878  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.667008  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.667209  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.667223  308831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-541721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-541721/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-541721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:15:43.781703  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
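	# Hedged sketch (verification only): confirm the hostname pinning performed by the
	# /etc/hosts script above took effect inside the guest.
	hostname                                  # should print newest-cni-541721
	grep -n 'newest-cni-541721' /etc/hosts    # should show the 127.0.1.1 entry added or rewritten above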
	I0407 14:15:43.781735  308831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:15:43.781770  308831 buildroot.go:174] setting up certificates
	I0407 14:15:43.781781  308831 provision.go:84] configureAuth start
	I0407 14:15:43.781789  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.782098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.784807  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785138  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.785165  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785310  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.787964  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788465  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.788506  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788684  308831 provision.go:143] copyHostCerts
	I0407 14:15:43.788737  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:15:43.788762  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:15:43.788828  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:15:43.788909  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:15:43.788917  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:15:43.788941  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:15:43.789008  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:15:43.789016  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:15:43.789045  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:15:43.789089  308831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-541721 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-541721]
	I0407 14:15:44.038906  308831 provision.go:177] copyRemoteCerts
	I0407 14:15:44.038972  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:15:44.038998  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.041517  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.041889  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.041921  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.042056  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.042296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.042445  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.042564  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.126574  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0407 14:15:44.150348  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 14:15:44.173128  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:15:44.196028  308831 provision.go:87] duration metric: took 414.219253ms to configureAuth
	I0407 14:15:44.196057  308831 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:15:44.196256  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:44.196365  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.198992  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199332  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.199359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.199649  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199841  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199983  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.200187  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.200392  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.200406  308831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:15:44.425698  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:15:44.425730  308831 machine.go:96] duration metric: took 1.00036936s to provisionDockerMachine
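	# Hedged sketch (verification only): confirm the container-runtime options written
	# above and that CRI-O came back after the restart.
	cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expected: active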
	I0407 14:15:44.425742  308831 start.go:293] postStartSetup for "newest-cni-541721" (driver="kvm2")
	I0407 14:15:44.425753  308831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:15:44.425769  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.426237  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:15:44.426282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.428748  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429105  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.429137  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429312  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.429508  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.429691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.429839  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.514924  308831 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:15:44.519014  308831 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:15:44.519041  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:15:44.519105  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:15:44.519203  308831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:15:44.519338  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:15:44.528306  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:44.552208  308831 start.go:296] duration metric: took 126.448126ms for postStartSetup
	I0407 14:15:44.552258  308831 fix.go:56] duration metric: took 18.815446562s for fixHost
	I0407 14:15:44.552283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.555012  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555411  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.555436  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555613  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.555777  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.555921  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.556086  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.556274  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.556581  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.556596  308831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:15:44.665315  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035344.637882085
	
	I0407 14:15:44.665344  308831 fix.go:216] guest clock: 1744035344.637882085
	I0407 14:15:44.665352  308831 fix.go:229] Guest: 2025-04-07 14:15:44.637882085 +0000 UTC Remote: 2025-04-07 14:15:44.552262543 +0000 UTC m=+18.960633497 (delta=85.619542ms)
	I0407 14:15:44.665378  308831 fix.go:200] guest clock delta is within tolerance: 85.619542ms
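
(Editor's note, not part of the captured log: the lines above show minikube reading the guest clock over SSH with `date +%s.%N` and accepting the ~85ms delta against the host clock as within tolerance. A minimal Go sketch of such a skew check follows; it is illustrative only, not minikube's actual source, and the 2-second tolerance is an assumed value.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it
// falls under the given tolerance.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta < tol
}

func main() {
	host := time.Now()
	// In the log the guest time comes from `date +%s.%N` run over SSH; here it
	// is simulated with a fixed 85ms offset.
	guest := host.Add(85 * time.Millisecond)
	delta, ok := withinTolerance(guest, host, 2*time.Second) // assumed tolerance
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
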
	I0407 14:15:44.665385  308831 start.go:83] releasing machines lock for "newest-cni-541721", held for 18.928588169s
	I0407 14:15:44.665411  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.665665  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:44.668359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.668769  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.668796  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.669001  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669663  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669764  308831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:15:44.669821  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.669881  308831 ssh_runner.go:195] Run: cat /version.json
	I0407 14:15:44.669903  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.672537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672728  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672882  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.672910  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673079  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.673108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673126  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673306  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673471  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673479  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673639  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673629  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.673808  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.772603  308831 ssh_runner.go:195] Run: systemctl --version
	I0407 14:15:44.778824  308831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:15:44.927200  308831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:15:44.934229  308831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:15:44.934295  308831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:15:44.949862  308831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:15:44.949886  308831 start.go:495] detecting cgroup driver to use...
	I0407 14:15:44.949946  308831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:15:44.965426  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:15:44.978798  308831 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:15:44.978861  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:15:44.991899  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:15:45.004571  308831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:15:45.128809  308831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:15:45.285871  308831 docker.go:233] disabling docker service ...
	I0407 14:15:45.285943  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:15:45.300353  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:15:45.313521  308831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:15:45.446753  308831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:15:45.566017  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:15:45.581006  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:15:45.599340  308831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 14:15:45.599422  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.609965  308831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:15:45.610059  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.620860  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:42.361106  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:42.374378  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:42.374461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:42.409267  306360 cri.go:89] found id: ""
	I0407 14:15:42.409296  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.409304  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:42.409309  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:42.409361  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:42.442512  306360 cri.go:89] found id: ""
	I0407 14:15:42.442540  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.442548  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:42.442554  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:42.442603  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:42.476016  306360 cri.go:89] found id: ""
	I0407 14:15:42.476044  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.476055  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:42.476063  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:42.476127  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:42.507103  306360 cri.go:89] found id: ""
	I0407 14:15:42.507138  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.507145  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:42.507151  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:42.507205  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:42.543140  306360 cri.go:89] found id: ""
	I0407 14:15:42.543167  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.543178  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:42.543185  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:42.543260  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:42.583718  306360 cri.go:89] found id: ""
	I0407 14:15:42.583749  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.583756  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:42.583764  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:42.583826  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:42.617614  306360 cri.go:89] found id: ""
	I0407 14:15:42.617649  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.617660  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:42.617668  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:42.617736  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:42.652193  306360 cri.go:89] found id: ""
	I0407 14:15:42.652220  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.652227  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:42.652237  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:42.652250  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:42.700778  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:42.700817  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:42.713926  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:42.713958  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:42.781552  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:42.781577  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:42.781590  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:42.857460  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:42.857502  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:45.397689  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:45.416022  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:45.416089  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:45.457038  306360 cri.go:89] found id: ""
	I0407 14:15:45.457078  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.457089  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:45.457097  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:45.457168  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:45.491527  306360 cri.go:89] found id: ""
	I0407 14:15:45.491559  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.491570  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:45.491578  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:45.491647  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:45.524296  306360 cri.go:89] found id: ""
	I0407 14:15:45.524333  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.524344  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:45.524352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:45.524416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:45.562418  306360 cri.go:89] found id: ""
	I0407 14:15:45.562450  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.562461  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:45.562469  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:45.562537  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:45.601384  306360 cri.go:89] found id: ""
	I0407 14:15:45.601409  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.601417  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:45.601423  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:45.601471  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:45.638899  306360 cri.go:89] found id: ""
	I0407 14:15:45.638924  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.638933  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:45.638939  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:45.639005  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:45.675994  306360 cri.go:89] found id: ""
	I0407 14:15:45.676031  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.676047  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:45.676064  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:45.676128  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:45.714599  306360 cri.go:89] found id: ""
	I0407 14:15:45.714626  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.714637  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:45.714648  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:45.714665  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:45.780477  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:45.780527  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:45.794822  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:45.794859  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:45.866895  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:45.866921  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:45.866944  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:45.631474  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.644263  308831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:15:45.658794  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.670123  308831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.689249  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.699508  308831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:15:45.709814  308831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:15:45.709869  308831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:15:45.723859  308831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:15:45.733593  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:45.849319  308831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 14:15:45.947041  308831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:15:45.947134  308831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:15:45.952013  308831 start.go:563] Will wait 60s for crictl version
	I0407 14:15:45.952094  308831 ssh_runner.go:195] Run: which crictl
	I0407 14:15:45.956063  308831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:15:46.003168  308831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:15:46.003266  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.030604  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.060415  308831 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 14:15:46.061532  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:46.064257  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064649  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:46.064686  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064942  308831 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 14:15:46.069108  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:46.082697  308831 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0407 14:15:46.083791  308831 kubeadm.go:883] updating cluster {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5
41721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:15:46.083896  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:46.083950  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:46.117284  308831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 14:15:46.117364  308831 ssh_runner.go:195] Run: which lz4
	I0407 14:15:46.121377  308831 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 14:15:46.125460  308831 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 14:15:46.125488  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 14:15:47.523799  308831 crio.go:462] duration metric: took 1.402446769s to copy over tarball
	I0407 14:15:47.523885  308831 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 14:15:49.780413  308831 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256487333s)
	I0407 14:15:49.780472  308831 crio.go:469] duration metric: took 2.256631266s to extract the tarball
	I0407 14:15:49.780484  308831 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 14:15:49.817617  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:49.861772  308831 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:15:49.861798  308831 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:15:49.861811  308831 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.32.2 crio true true} ...
	I0407 14:15:49.861914  308831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-541721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:15:49.861982  308831 ssh_runner.go:195] Run: crio config
	I0407 14:15:49.906766  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:49.906790  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:49.906799  308831 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0407 14:15:49.906821  308831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-541721 NodeName:newest-cni-541721 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:15:49.906963  308831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-541721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:15:49.907028  308831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:15:49.917114  308831 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:15:49.917177  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:15:49.927296  308831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 14:15:49.945058  308831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:15:49.962171  308831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0407 14:15:49.981232  308831 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0407 14:15:49.985429  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:49.997919  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:50.112228  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:50.138008  308831 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721 for IP: 192.168.39.230
	I0407 14:15:50.138038  308831 certs.go:194] generating shared ca certs ...
	I0407 14:15:50.138056  308831 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.138217  308831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:15:50.138257  308831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:15:50.138269  308831 certs.go:256] generating profile certs ...
	I0407 14:15:50.138383  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/client.key
	I0407 14:15:50.138463  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key.ae70fd14
	I0407 14:15:50.138512  308831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key
	I0407 14:15:50.138669  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:15:50.138721  308831 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:15:50.138735  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:15:50.138774  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:15:50.138805  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:15:50.138835  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:15:50.138899  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:50.139675  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:15:50.197283  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:15:50.242193  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:15:50.269592  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:15:50.295620  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 14:15:50.326901  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:15:50.350149  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:15:50.373570  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:15:50.396967  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:15:50.419713  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:15:50.443345  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:15:50.466277  308831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:15:50.482772  308831 ssh_runner.go:195] Run: openssl version
	I0407 14:15:50.488692  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:15:50.499480  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504091  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504182  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.510343  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:15:50.521521  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:15:50.532621  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537354  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537410  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.543022  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:15:50.554034  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:15:50.564979  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569666  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569727  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.575423  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:15:50.586213  308831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:15:50.590961  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:15:50.596887  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:15:50.602578  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:15:50.608528  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:15:50.614421  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:15:50.620333  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
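
(Editor's note, not part of the captured log: the `openssl x509 ... -checkend 86400` calls above test whether each cluster certificate expires within the next 24 hours. The Go sketch below does the equivalent check with crypto/x509; it is an illustration under that assumption, not minikube's implementation, and the certificate path is simply taken from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemBytes expires
// within the given window (86400s, i.e. 24h, in the logged openssl command).
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
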
	I0407 14:15:50.626231  308831 kubeadm.go:392] StartCluster: {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5417
21 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:50.626391  308831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:15:50.626505  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:45.951585  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:45.951615  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:48.488815  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:48.507944  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:48.508026  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:48.551257  306360 cri.go:89] found id: ""
	I0407 14:15:48.551300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.551314  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:48.551324  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:48.551402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:48.595600  306360 cri.go:89] found id: ""
	I0407 14:15:48.595626  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.595634  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:48.595640  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:48.595704  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:48.639221  306360 cri.go:89] found id: ""
	I0407 14:15:48.639248  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.639255  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:48.639261  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:48.639326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:48.680520  306360 cri.go:89] found id: ""
	I0407 14:15:48.680562  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.680575  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:48.680585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:48.680679  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:48.728260  306360 cri.go:89] found id: ""
	I0407 14:15:48.728300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.728315  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:48.728326  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:48.728410  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:48.773839  306360 cri.go:89] found id: ""
	I0407 14:15:48.773875  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.773886  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:48.773893  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:48.773955  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:48.814915  306360 cri.go:89] found id: ""
	I0407 14:15:48.814947  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.814957  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:48.814963  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:48.815028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:48.860191  306360 cri.go:89] found id: ""
	I0407 14:15:48.860225  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.860245  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:48.860258  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:48.860273  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:48.922676  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:48.922714  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:48.939569  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:48.939618  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:49.016199  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:49.016225  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:49.016248  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:49.097968  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:49.098013  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:50.663771  308831 cri.go:89] found id: ""
	I0407 14:15:50.663873  308831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:15:50.674085  308831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:15:50.674107  308831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:15:50.674160  308831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:15:50.683827  308831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:15:50.684345  308831 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-541721" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:50.684567  308831 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-242355/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-541721" cluster setting kubeconfig missing "newest-cni-541721" context setting]
	I0407 14:15:50.684927  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.686121  308831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:15:50.695269  308831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I0407 14:15:50.695302  308831 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:15:50.695314  308831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 14:15:50.695355  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:50.736911  308831 cri.go:89] found id: ""
	I0407 14:15:50.737008  308831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:15:50.753425  308831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:15:50.765206  308831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:15:50.765225  308831 kubeadm.go:157] found existing configuration files:
	
	I0407 14:15:50.765267  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:15:50.774388  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:15:50.774441  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:15:50.783710  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:15:50.792577  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:15:50.792633  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:15:50.802813  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.811735  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:15:50.811788  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.820555  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:15:50.829705  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:15:50.829752  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:15:50.839810  308831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:15:50.849133  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:50.964318  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.072919  308831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108554265s)
	I0407 14:15:52.072960  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.328909  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.421835  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
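The five Run lines above are the individual kubeadm init phases minikube drives during a control-plane restart: certs, kubeconfig, kubelet-start, control-plane, and etcd, each fed the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of that sequence (illustrative only; the real invocations also set PATH to the versioned binaries directory via sudo env, exactly as shown above):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // Each phase is run individually so an existing node can be restarted in place.
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Fatalf("phase %v failed: %v\n%s", phase, err, out)
            }
        }
    }
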
	I0407 14:15:52.499558  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:52.499668  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.000158  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.500670  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.520865  308831 api_server.go:72] duration metric: took 1.021307622s to wait for apiserver process to appear ...
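The pgrep calls above, spaced roughly 500 ms apart, are the wait for a kube-apiserver process to exist before any healthz probing starts. A rough, self-contained equivalent of that wait (hedged: the two-minute deadline is an assumption for illustration, not a value taken from the log):

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        // Poll every 500 ms until a kube-apiserver process shows up, or give up at the deadline.
        deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return // process found; healthz probing can start
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
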
	I0407 14:15:53.520900  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:53.520929  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:51.641164  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:51.655473  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:51.655548  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:51.690008  306360 cri.go:89] found id: ""
	I0407 14:15:51.690036  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.690047  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:51.690055  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:51.690118  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:51.728115  306360 cri.go:89] found id: ""
	I0407 14:15:51.728141  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.728150  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:51.728157  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:51.728222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:51.764117  306360 cri.go:89] found id: ""
	I0407 14:15:51.764156  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.764168  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:51.764180  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:51.764243  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:51.801243  306360 cri.go:89] found id: ""
	I0407 14:15:51.801279  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.801291  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:51.801299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:51.801363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:51.838262  306360 cri.go:89] found id: ""
	I0407 14:15:51.838292  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.838302  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:51.838310  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:51.838378  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:51.880251  306360 cri.go:89] found id: ""
	I0407 14:15:51.880284  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.880294  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:51.880302  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:51.880373  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:51.922175  306360 cri.go:89] found id: ""
	I0407 14:15:51.922203  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.922213  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:51.922220  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:51.922291  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:51.963932  306360 cri.go:89] found id: ""
	I0407 14:15:51.963960  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.963970  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:51.963985  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:51.964000  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:52.046274  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:52.046322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:52.093979  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:52.094019  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:52.148613  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:52.148660  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:52.162525  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:52.162559  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:52.239788  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:54.740063  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:54.757191  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:54.757267  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:54.789524  306360 cri.go:89] found id: ""
	I0407 14:15:54.789564  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.789575  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:54.789584  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:54.789646  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:54.823746  306360 cri.go:89] found id: ""
	I0407 14:15:54.823785  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.823797  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:54.823805  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:54.823875  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:54.861371  306360 cri.go:89] found id: ""
	I0407 14:15:54.861406  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.861417  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:54.861424  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:54.861486  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:54.896286  306360 cri.go:89] found id: ""
	I0407 14:15:54.896318  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.896327  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:54.896334  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:54.896402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:54.938594  306360 cri.go:89] found id: ""
	I0407 14:15:54.938632  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.938643  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:54.938651  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:54.938722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:54.971701  306360 cri.go:89] found id: ""
	I0407 14:15:54.971737  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.971745  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:54.971751  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:54.971809  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:55.008651  306360 cri.go:89] found id: ""
	I0407 14:15:55.008682  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.008693  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:55.008700  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:55.008768  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:55.043829  306360 cri.go:89] found id: ""
	I0407 14:15:55.043860  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.043868  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:55.043879  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:55.043899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:55.094682  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:55.094720  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:55.109798  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:55.109855  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:55.187514  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:55.187540  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:55.187555  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:55.273313  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:55.273360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
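The 14:15:5x block tagged 306360 is a second minikube run interleaved into this log (a v1.20.0 cluster, judging by the kubectl binary path) collecting diagnostics: for every expected control-plane component it asks CRI-O, via crictl, whether any container exists, and since none is found it falls back to the kubelet and CRI-O journals, dmesg, and kubectl describe nodes, which fails because nothing is listening on localhost:8443. The container check reduces to a loop like the following sketch (standard library only; the component names are the ones queried in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs, one per line; empty output means "no container".
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }
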
	I0407 14:15:56.021402  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.021428  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.021442  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.066617  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.066650  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.521245  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.526043  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:56.526070  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.021581  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.026339  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:57.026365  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.521022  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.525667  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:57.532348  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:57.532377  308831 api_server.go:131] duration metric: took 4.011467673s to wait for apiserver health ...
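The healthz sequence above is the usual shape of an apiserver restart: first 403 responses, because the anonymous probe is rejected until the RBAC bootstrap roles exist, then 500 responses while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200 with the body "ok". A hedged sketch of such a poll loop (not minikube's api_server.go; it skips TLS verification because the probe is anonymous and only the status code drives readiness):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Anonymous readiness probe: the certificate identity is not needed here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.39.230:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body)) // body is "ok"
                    return
                }
                // 403 and 500 simply mean "not ready yet"; the body lists failing post-start hooks.
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
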
	I0407 14:15:57.532391  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:57.532400  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:57.534300  308831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 14:15:57.535520  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 14:15:57.547844  308831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
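The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration referenced by the "Configuring bridge CNI" line. Its exact contents are not in the log; a representative bridge conflist of the general shape kubelet and CRI-O consume looks like this (example only, with an assumed 10.244.0.0/16 pod subnet, not the literal file minikube wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
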
	I0407 14:15:57.567595  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:57.571906  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:57.571945  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:57.571953  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:57.571961  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:57.571967  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:57.571978  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 14:15:57.571986  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:57.571991  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:57.571999  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 14:15:57.572004  308831 system_pods.go:74] duration metric: took 4.389672ms to wait for pod list to return data ...
	I0407 14:15:57.572014  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:57.575009  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:57.575029  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:57.575040  308831 node_conditions.go:105] duration metric: took 3.021612ms to run NodePressure ...
	I0407 14:15:57.575056  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:57.880816  308831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:15:57.894579  308831 ops.go:34] apiserver oom_adj: -16
	I0407 14:15:57.894607  308831 kubeadm.go:597] duration metric: took 7.220492712s to restartPrimaryControlPlane
	I0407 14:15:57.894619  308831 kubeadm.go:394] duration metric: took 7.268398637s to StartCluster
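The oom_adj probe above, cat /proc/$(pgrep kube-apiserver)/oom_adj returning -16, confirms the restarted apiserver is strongly de-prioritized for the kernel OOM killer. Reading that value programmatically is a small /proc lookup (sketch; it simply takes the first PID pgrep reports):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
            return
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            return
        }
        // Negative values make the process less likely to be chosen by the OOM killer.
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("kube-apiserver oom_adj: %s", adj) // -16 in the run above
    }
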
	I0407 14:15:57.894641  308831 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.894822  308831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:57.896037  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.896384  308831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:15:57.896474  308831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:15:57.896568  308831 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-541721"
	I0407 14:15:57.896589  308831 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-541721"
	W0407 14:15:57.896596  308831 addons.go:247] addon storage-provisioner should already be in state true
	I0407 14:15:57.896613  308831 addons.go:69] Setting default-storageclass=true in profile "newest-cni-541721"
	I0407 14:15:57.896638  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896625  308831 addons.go:69] Setting dashboard=true in profile "newest-cni-541721"
	I0407 14:15:57.896642  308831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-541721"
	I0407 14:15:57.896665  308831 addons.go:238] Setting addon dashboard=true in "newest-cni-541721"
	W0407 14:15:57.896675  308831 addons.go:247] addon dashboard should already be in state true
	I0407 14:15:57.896682  308831 addons.go:69] Setting metrics-server=true in profile "newest-cni-541721"
	I0407 14:15:57.896709  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896720  308831 addons.go:238] Setting addon metrics-server=true in "newest-cni-541721"
	W0407 14:15:57.896730  308831 addons.go:247] addon metrics-server should already be in state true
	I0407 14:15:57.896761  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.897130  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897144  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897129  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897179  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897224  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897170  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897247  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897289  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897439  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:57.898160  308831 out.go:177] * Verifying Kubernetes components...
	I0407 14:15:57.899427  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:57.914645  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0407 14:15:57.914658  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0407 14:15:57.915088  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915221  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915772  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915789  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.915919  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915929  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.916179  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916232  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916344  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.916804  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.916846  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.917048  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0407 14:15:57.917542  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.918163  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.918178  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.918569  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.919092  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919123  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.919192  308831 addons.go:238] Setting addon default-storageclass=true in "newest-cni-541721"
	W0407 14:15:57.919205  308831 addons.go:247] addon default-storageclass should already be in state true
	I0407 14:15:57.919233  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.919576  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919605  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.920769  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0407 14:15:57.921236  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.921729  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.921752  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.922088  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.922572  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.922608  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.937572  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0407 14:15:57.937695  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0407 14:15:57.938194  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.938660  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0407 14:15:57.938863  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.938887  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.938963  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939251  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.939620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0407 14:15:57.939642  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939848  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.939900  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.940021  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.940071  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940086  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940288  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940312  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940532  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.940651  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940673  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940694  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.940997  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941226  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941293  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.941418  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.943066  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.943556  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.944233  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.945868  308831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:15:57.945873  308831 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 14:15:57.945925  308831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 14:15:57.947501  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 14:15:57.947525  308831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 14:15:57.947549  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.947592  308831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:57.947606  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 14:15:57.947682  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.949194  308831 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 14:15:57.950596  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 14:15:57.950612  308831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 14:15:57.950633  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.951106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.951536  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951608  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.951866  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952012  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952224  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.952336  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.952370  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.952455  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.952697  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952854  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952995  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.954108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954455  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.954482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954659  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.954827  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.954967  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.955093  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.975194  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0407 14:15:57.975616  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.976107  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.976139  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.976544  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.976751  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.978595  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.978824  308831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:57.978842  308831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 14:15:57.978862  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.982043  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982380  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.982410  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982678  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.982840  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.982966  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.983081  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:58.102404  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:58.120015  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:58.120102  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:58.135300  308831 api_server.go:72] duration metric: took 238.836482ms to wait for apiserver process to appear ...
	I0407 14:15:58.135329  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:58.135349  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:58.141206  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:58.142587  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:58.142606  308831 api_server.go:131] duration metric: took 7.270895ms to wait for apiserver health ...
	I0407 14:15:58.142614  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:58.146900  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:58.146926  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:58.146935  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:58.146943  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:58.146948  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:58.146955  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running
	I0407 14:15:58.146961  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:58.146966  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:58.146972  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running
	I0407 14:15:58.146978  308831 system_pods.go:74] duration metric: took 4.358597ms to wait for pod list to return data ...
	I0407 14:15:58.146986  308831 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:15:58.150282  308831 default_sa.go:45] found service account: "default"
	I0407 14:15:58.150299  308831 default_sa.go:55] duration metric: took 3.303841ms for default service account to be created ...
	I0407 14:15:58.150309  308831 kubeadm.go:582] duration metric: took 253.863257ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:58.150322  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:58.153173  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:58.153197  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:58.153211  308831 node_conditions.go:105] duration metric: took 2.884813ms to run NodePressure ...
	I0407 14:15:58.153224  308831 start.go:241] waiting for startup goroutines ...
	I0407 14:15:58.193220  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:58.219746  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:58.279762  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 14:15:58.279792  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 14:15:58.310829  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 14:15:58.310854  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 14:15:58.365195  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 14:15:58.365223  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 14:15:58.418268  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 14:15:58.418311  308831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 14:15:58.452087  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 14:15:58.452125  308831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 14:15:58.472397  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.472435  308831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 14:15:58.493767  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 14:15:58.493792  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 14:15:58.538632  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.591626  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 14:15:58.591661  308831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 14:15:58.674454  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 14:15:58.674490  308831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 14:15:58.705316  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 14:15:58.705355  308831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 14:15:58.728819  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 14:15:58.728849  308831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 14:15:58.748297  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 14:15:58.748328  308831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 14:15:58.771377  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
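Every addon above is enabled the same way: its manifests are copied into /etc/kubernetes/addons/ on the VM over SSH and then applied in a single kubectl invocation that runs inside the VM against the node-local kubeconfig, with sudo passing KUBECONFIG through. A sketch of that apply step for the dashboard manifests listed above (illustrative; the kubectl path and manifest names are taken from the log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-clusterrole.yaml",
            "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
            "/etc/kubernetes/addons/dashboard-configmap.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-role.yaml",
            "/etc/kubernetes/addons/dashboard-rolebinding.yaml",
            "/etc/kubernetes/addons/dashboard-sa.yaml",
            "/etc/kubernetes/addons/dashboard-secret.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.32.2/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        // One apply for all manifests keeps ordering predictable and stays idempotent on re-runs.
        if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }
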
	I0407 14:15:59.673041  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.453258343s)
	I0407 14:15:59.673107  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673119  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673482  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673507  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673518  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673768  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673788  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673805  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.674036  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.480774359s)
	I0407 14:15:59.674082  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674344  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674361  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674372  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674387  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674683  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674696  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674710  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.695131  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.695152  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.695501  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.695523  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.695537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090200  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.55151201s)
	I0407 14:16:00.090258  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090628  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090645  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.090662  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090672  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090678  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090980  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090989  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090997  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.091007  308831 addons.go:479] Verifying addon metrics-server=true in "newest-cni-541721"
	I0407 14:16:00.245449  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.473999327s)
	I0407 14:16:00.245510  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245797  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.245858  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.245882  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245894  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245895  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.246148  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.246165  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.247614  308831 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-541721 addons enable metrics-server
	
	I0407 14:16:00.248959  308831 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0407 14:16:00.250078  308831 addons.go:514] duration metric: took 2.353612079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0407 14:16:00.250126  308831 start.go:246] waiting for cluster config update ...
	I0407 14:16:00.250153  308831 start.go:255] writing updated cluster config ...
	I0407 14:16:00.250500  308831 ssh_runner.go:195] Run: rm -f paused
	I0407 14:16:00.299045  308831 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:16:00.300679  308831 out.go:177] * Done! kubectl is now configured to use "newest-cni-541721" cluster and "default" namespace by default
	I0407 14:15:57.811712  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:57.825529  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:57.825597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:57.863098  306360 cri.go:89] found id: ""
	I0407 14:15:57.863139  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.863152  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:57.863160  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:57.863231  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:57.902011  306360 cri.go:89] found id: ""
	I0407 14:15:57.902049  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.902059  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:57.902067  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:57.902134  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:57.965448  306360 cri.go:89] found id: ""
	I0407 14:15:57.965475  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.965485  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:57.965492  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:57.965554  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:58.012478  306360 cri.go:89] found id: ""
	I0407 14:15:58.012508  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.012519  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:58.012528  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:58.012591  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:58.046324  306360 cri.go:89] found id: ""
	I0407 14:15:58.046352  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.046359  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:58.046365  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:58.046416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:58.082655  306360 cri.go:89] found id: ""
	I0407 14:15:58.082690  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.082701  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:58.082771  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:58.082845  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:58.117888  306360 cri.go:89] found id: ""
	I0407 14:15:58.117917  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.117929  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:58.117936  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:58.118002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:58.158074  306360 cri.go:89] found id: ""
	I0407 14:15:58.158100  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.158110  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:58.158122  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:58.158140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:58.250799  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:58.250823  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:58.250839  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:58.331250  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:58.331289  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:58.373589  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:58.373616  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:58.441487  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:58.441523  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:00.956209  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:00.969519  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:00.969597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:01.006091  306360 cri.go:89] found id: ""
	I0407 14:16:01.006123  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.006134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:01.006142  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:01.006208  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:01.040220  306360 cri.go:89] found id: ""
	I0407 14:16:01.040251  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.040262  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:01.040271  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:01.040341  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:01.075777  306360 cri.go:89] found id: ""
	I0407 14:16:01.075813  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.075824  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:01.075829  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:01.075904  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:01.113161  306360 cri.go:89] found id: ""
	I0407 14:16:01.113188  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.113196  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:01.113202  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:01.113264  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:01.145743  306360 cri.go:89] found id: ""
	I0407 14:16:01.145781  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.145793  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:01.145800  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:01.145891  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:01.180531  306360 cri.go:89] found id: ""
	I0407 14:16:01.180564  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.180576  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:01.180585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:01.180651  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:01.219646  306360 cri.go:89] found id: ""
	I0407 14:16:01.219679  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.219691  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:01.219699  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:01.219765  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:01.262312  306360 cri.go:89] found id: ""
	I0407 14:16:01.262345  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.262352  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:01.262363  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:01.262377  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:01.339749  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:01.339783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:01.382985  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:01.383022  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:01.434889  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:01.434921  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:01.451353  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:01.451378  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:01.532064  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.032625  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:04.045945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:04.046004  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:04.079093  306360 cri.go:89] found id: ""
	I0407 14:16:04.079123  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.079134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:04.079143  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:04.079206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:04.114148  306360 cri.go:89] found id: ""
	I0407 14:16:04.114181  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.114192  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:04.114200  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:04.114270  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:04.152718  306360 cri.go:89] found id: ""
	I0407 14:16:04.152747  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.152758  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:04.152766  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:04.152841  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:04.190031  306360 cri.go:89] found id: ""
	I0407 14:16:04.190065  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.190077  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:04.190085  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:04.190163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:04.227623  306360 cri.go:89] found id: ""
	I0407 14:16:04.227660  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.227671  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:04.227679  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:04.227747  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:04.268005  306360 cri.go:89] found id: ""
	I0407 14:16:04.268035  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.268047  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:04.268055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:04.268125  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:04.304340  306360 cri.go:89] found id: ""
	I0407 14:16:04.304364  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.304374  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:04.304381  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:04.304456  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:04.341425  306360 cri.go:89] found id: ""
	I0407 14:16:04.341490  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.341502  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:04.341513  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:04.341526  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:04.398148  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:04.398179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:04.414586  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:04.414612  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:04.482621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.482650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:04.482669  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:04.556315  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:04.556359  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.115968  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:07.129613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:07.129672  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:07.167142  306360 cri.go:89] found id: ""
	I0407 14:16:07.167170  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.167180  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:07.167187  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:07.167246  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:07.198691  306360 cri.go:89] found id: ""
	I0407 14:16:07.198723  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.198730  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:07.198736  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:07.198790  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:07.231226  306360 cri.go:89] found id: ""
	I0407 14:16:07.231259  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.231268  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:07.231274  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:07.231326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:07.263714  306360 cri.go:89] found id: ""
	I0407 14:16:07.263746  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.263757  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:07.263765  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:07.263828  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:07.301046  306360 cri.go:89] found id: ""
	I0407 14:16:07.301079  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.301090  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:07.301098  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:07.301189  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:07.333910  306360 cri.go:89] found id: ""
	I0407 14:16:07.333938  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.333948  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:07.333956  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:07.334023  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:07.366899  306360 cri.go:89] found id: ""
	I0407 14:16:07.366927  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.366937  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:07.366945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:07.367014  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:07.398845  306360 cri.go:89] found id: ""
	I0407 14:16:07.398878  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.398887  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:07.398899  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:07.398912  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:07.411632  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:07.411663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:07.478836  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:07.478865  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:07.478883  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:07.557802  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:07.557852  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.602752  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:07.602785  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:10.155705  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:10.169146  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:10.169232  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:10.202657  306360 cri.go:89] found id: ""
	I0407 14:16:10.202694  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.202702  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:10.202708  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:10.202761  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:10.238239  306360 cri.go:89] found id: ""
	I0407 14:16:10.238272  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.238284  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:10.238292  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:10.238363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:10.270804  306360 cri.go:89] found id: ""
	I0407 14:16:10.270833  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.270840  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:10.270847  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:10.270897  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:10.319453  306360 cri.go:89] found id: ""
	I0407 14:16:10.319491  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.319502  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:10.319510  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:10.319581  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:10.352622  306360 cri.go:89] found id: ""
	I0407 14:16:10.352654  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.352663  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:10.352670  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:10.352741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:10.385869  306360 cri.go:89] found id: ""
	I0407 14:16:10.385897  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.385906  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:10.385912  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:10.385979  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:10.420689  306360 cri.go:89] found id: ""
	I0407 14:16:10.420715  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.420724  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:10.420729  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:10.420786  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:10.454182  306360 cri.go:89] found id: ""
	I0407 14:16:10.454210  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.454226  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:10.454238  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:10.454258  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:10.467987  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:10.468021  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:10.535621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:10.535650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:10.535663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:10.613921  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:10.613963  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:10.663267  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:10.663299  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.220167  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:13.234197  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:13.234271  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:13.273116  306360 cri.go:89] found id: ""
	I0407 14:16:13.273159  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.273174  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:13.273180  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:13.273236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:13.309984  306360 cri.go:89] found id: ""
	I0407 14:16:13.310024  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.310036  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:13.310044  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:13.310110  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:13.343107  306360 cri.go:89] found id: ""
	I0407 14:16:13.343145  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.343156  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:13.343162  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:13.343226  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:13.375826  306360 cri.go:89] found id: ""
	I0407 14:16:13.375857  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.375865  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:13.375871  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:13.375934  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:13.408895  306360 cri.go:89] found id: ""
	I0407 14:16:13.408930  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.408940  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:13.408945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:13.409002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:13.442272  306360 cri.go:89] found id: ""
	I0407 14:16:13.442309  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.442319  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:13.442329  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:13.442395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:13.478556  306360 cri.go:89] found id: ""
	I0407 14:16:13.478592  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.478600  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:13.478606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:13.478671  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:13.512229  306360 cri.go:89] found id: ""
	I0407 14:16:13.512264  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.512274  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:13.512287  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:13.512304  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.561858  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:13.561899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:13.575518  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:13.575549  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:13.638490  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:13.638515  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:13.638528  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:13.714178  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:13.714219  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.252354  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:16.265849  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:16.265939  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:16.298742  306360 cri.go:89] found id: ""
	I0407 14:16:16.298774  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.298781  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:16.298788  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:16.298844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:16.332441  306360 cri.go:89] found id: ""
	I0407 14:16:16.332476  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.332487  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:16.332496  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:16.332563  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:16.365820  306360 cri.go:89] found id: ""
	I0407 14:16:16.365857  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.365868  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:16.365880  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:16.365972  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:16.399094  306360 cri.go:89] found id: ""
	I0407 14:16:16.399125  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.399134  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:16.399140  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:16.399193  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:16.433322  306360 cri.go:89] found id: ""
	I0407 14:16:16.433356  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.433364  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:16.433372  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:16.433428  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:16.466435  306360 cri.go:89] found id: ""
	I0407 14:16:16.466466  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.466476  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:16.466484  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:16.466551  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:16.498858  306360 cri.go:89] found id: ""
	I0407 14:16:16.498887  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.498895  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:16.498900  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:16.498952  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:16.531126  306360 cri.go:89] found id: ""
	I0407 14:16:16.531166  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.531177  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:16.531192  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:16.531206  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:16.610817  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:16.610857  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.650145  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:16.650180  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:16.699735  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:16.699821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:16.719603  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:16.719634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:16.813399  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.315126  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:19.327908  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:19.327993  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:19.361834  306360 cri.go:89] found id: ""
	I0407 14:16:19.361868  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.361877  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:19.361883  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:19.361947  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:19.396519  306360 cri.go:89] found id: ""
	I0407 14:16:19.396554  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.396565  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:19.396573  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:19.396645  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:19.431627  306360 cri.go:89] found id: ""
	I0407 14:16:19.431656  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.431665  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:19.431671  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:19.431741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:19.465284  306360 cri.go:89] found id: ""
	I0407 14:16:19.465315  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.465323  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:19.465332  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:19.465393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:19.497940  306360 cri.go:89] found id: ""
	I0407 14:16:19.497970  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.497984  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:19.497991  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:19.498060  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:19.533336  306360 cri.go:89] found id: ""
	I0407 14:16:19.533376  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.533389  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:19.533398  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:19.533469  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:19.568026  306360 cri.go:89] found id: ""
	I0407 14:16:19.568059  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.568076  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:19.568084  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:19.568153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:19.601780  306360 cri.go:89] found id: ""
	I0407 14:16:19.601835  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.601844  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:19.601854  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:19.601865  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:19.642543  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:19.642574  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:19.692073  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:19.692119  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:19.705748  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:19.705783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:19.772531  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.772556  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:19.772577  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.351857  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:22.365447  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:22.365514  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:22.403999  306360 cri.go:89] found id: ""
	I0407 14:16:22.404028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.404036  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:22.404043  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:22.404094  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:22.441384  306360 cri.go:89] found id: ""
	I0407 14:16:22.441417  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.441426  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:22.441432  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:22.441487  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:22.490577  306360 cri.go:89] found id: ""
	I0407 14:16:22.490610  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.490621  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:22.490628  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:22.490714  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:22.537991  306360 cri.go:89] found id: ""
	I0407 14:16:22.538028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.538040  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:22.538049  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:22.538120  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:22.584777  306360 cri.go:89] found id: ""
	I0407 14:16:22.584812  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.584824  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:22.584832  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:22.584920  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:22.627558  306360 cri.go:89] found id: ""
	I0407 14:16:22.627588  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.627596  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:22.627602  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:22.627665  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:22.664048  306360 cri.go:89] found id: ""
	I0407 14:16:22.664080  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.664089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:22.664125  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:22.664180  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:22.697281  306360 cri.go:89] found id: ""
	I0407 14:16:22.697318  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.697329  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:22.697345  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:22.697360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:22.750380  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:22.750418  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:22.764135  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:22.764163  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:22.830720  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:22.830756  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:22.830775  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.910687  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:22.910728  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.452699  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:25.466127  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:25.466217  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:25.503288  306360 cri.go:89] found id: ""
	I0407 14:16:25.503320  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.503329  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:25.503335  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:25.503395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:25.535855  306360 cri.go:89] found id: ""
	I0407 14:16:25.535891  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.535900  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:25.535907  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:25.535969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:25.569103  306360 cri.go:89] found id: ""
	I0407 14:16:25.569135  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.569143  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:25.569149  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:25.569201  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:25.604482  306360 cri.go:89] found id: ""
	I0407 14:16:25.604521  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.604533  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:25.604542  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:25.604600  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:25.638915  306360 cri.go:89] found id: ""
	I0407 14:16:25.638948  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.638958  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:25.638966  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:25.639042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:25.673087  306360 cri.go:89] found id: ""
	I0407 14:16:25.673122  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.673134  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:25.673141  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:25.673211  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:25.706454  306360 cri.go:89] found id: ""
	I0407 14:16:25.706490  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.706502  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:25.706511  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:25.706596  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:25.739824  306360 cri.go:89] found id: ""
	I0407 14:16:25.739861  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.739872  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:25.739885  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:25.739900  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:25.818002  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:25.818045  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.866681  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:25.866715  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:25.920791  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:25.920824  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:25.934838  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:25.934870  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:26.005417  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:28.507450  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:28.526968  306360 kubeadm.go:597] duration metric: took 4m4.425341549s to restartPrimaryControlPlane
	W0407 14:16:28.527068  306360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0407 14:16:28.527097  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:16:33.604963  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.077840903s)
	I0407 14:16:33.605045  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:16:33.619392  306360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:16:33.629694  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:16:33.639997  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:16:33.640021  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:16:33.640070  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:16:33.648891  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:16:33.648942  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:16:33.657964  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:16:33.666862  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:16:33.666907  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:16:33.675917  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.684806  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:16:33.684865  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.694385  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:16:33.703347  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:16:33.703399  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:16:33.712413  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:16:33.785507  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:16:33.785591  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:16:33.919661  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:16:33.919797  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:16:33.919913  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:16:34.088006  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:16:34.090058  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:16:34.090179  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:16:34.090273  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:16:34.090394  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:16:34.090467  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:16:34.090559  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:16:34.090629  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:16:34.090692  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:16:34.090745  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:16:34.090963  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:16:34.091371  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:16:34.091513  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:16:34.091573  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:16:34.250084  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:16:34.456551  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:16:34.600069  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:16:34.730872  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:16:34.745839  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:16:34.748203  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:16:34.748481  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:16:34.899583  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:16:34.901383  306360 out.go:235]   - Booting up control plane ...
	I0407 14:16:34.901512  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:16:34.910634  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:16:34.913019  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:16:34.913965  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:16:34.916441  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:17:14.918244  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:17:14.918361  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:14.918550  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:19.918793  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:19.919063  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:29.919626  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:29.919857  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:49.920620  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:49.920914  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.922713  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:18:29.922989  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.923024  306360 kubeadm.go:310] 
	I0407 14:18:29.923100  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:18:29.923192  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:18:29.923212  306360 kubeadm.go:310] 
	I0407 14:18:29.923266  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:18:29.923310  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:18:29.923461  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:18:29.923472  306360 kubeadm.go:310] 
	I0407 14:18:29.923695  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:18:29.923740  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:18:29.923826  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:18:29.923853  306360 kubeadm.go:310] 
	I0407 14:18:29.924004  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:18:29.924126  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:18:29.924136  306360 kubeadm.go:310] 
	I0407 14:18:29.924282  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:18:29.924392  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:18:29.924528  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:18:29.924627  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:18:29.924654  306360 kubeadm.go:310] 
	I0407 14:18:29.924807  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:18:29.924945  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:18:29.925037  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 14:18:29.925275  306360 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 14:18:29.925332  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:18:35.351481  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.426121458s)
	I0407 14:18:35.351559  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:18:35.365827  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:18:35.376549  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:18:35.376577  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:18:35.376637  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:18:35.386629  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:18:35.386696  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:18:35.397247  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:18:35.406945  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:18:35.407018  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:18:35.416924  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.426596  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:18:35.426665  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.436695  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:18:35.446316  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:18:35.446368  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:18:35.455990  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:18:35.529786  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:18:35.529882  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:18:35.669860  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:18:35.670044  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:18:35.670206  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:18:35.849445  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:18:35.856509  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:18:35.856606  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:18:35.856681  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:18:35.856771  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:18:35.856853  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:18:35.856956  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:18:35.857016  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:18:35.857075  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:18:35.857126  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:18:35.857196  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:18:35.857268  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:18:35.857304  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:18:35.857357  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:18:35.974809  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:18:36.175364  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:18:36.293266  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:18:36.465625  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:18:36.480525  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:18:36.481848  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:18:36.481922  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:18:36.613415  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:18:36.615110  306360 out.go:235]   - Booting up control plane ...
	I0407 14:18:36.615269  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:18:36.628134  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:18:36.629532  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:18:36.630589  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:18:36.634513  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:19:16.636775  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:19:16.637057  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:16.637316  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:21.638264  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:21.638529  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:31.638701  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:31.638962  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:51.638889  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:51.639128  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638384  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:20:31.638644  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638668  306360 kubeadm.go:310] 
	I0407 14:20:31.638702  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:20:31.638742  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:20:31.638748  306360 kubeadm.go:310] 
	I0407 14:20:31.638775  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:20:31.638810  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:20:31.638898  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:20:31.638904  306360 kubeadm.go:310] 
	I0407 14:20:31.638985  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:20:31.639023  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:20:31.639065  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:20:31.639072  306360 kubeadm.go:310] 
	I0407 14:20:31.639203  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:20:31.639327  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:20:31.639358  306360 kubeadm.go:310] 
	I0407 14:20:31.639513  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:20:31.639633  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:20:31.639734  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:20:31.639862  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:20:31.639875  306360 kubeadm.go:310] 
	I0407 14:20:31.640981  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:20:31.641122  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:20:31.641237  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:20:31.641301  306360 kubeadm.go:394] duration metric: took 8m7.609204589s to StartCluster
	I0407 14:20:31.641373  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:20:31.641452  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:20:31.685303  306360 cri.go:89] found id: ""
	I0407 14:20:31.685334  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.685345  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:20:31.685353  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:20:31.685419  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:20:31.719244  306360 cri.go:89] found id: ""
	I0407 14:20:31.719274  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.719285  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:20:31.719293  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:20:31.719367  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:20:31.753252  306360 cri.go:89] found id: ""
	I0407 14:20:31.753282  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.753292  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:20:31.753299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:20:31.753366  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:20:31.783957  306360 cri.go:89] found id: ""
	I0407 14:20:31.784001  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.784014  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:20:31.784024  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:20:31.784113  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:20:31.819615  306360 cri.go:89] found id: ""
	I0407 14:20:31.819652  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.819660  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:20:31.819666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:20:31.819730  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:20:31.855903  306360 cri.go:89] found id: ""
	I0407 14:20:31.855942  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.855954  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:20:31.855962  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:20:31.856028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:20:31.890988  306360 cri.go:89] found id: ""
	I0407 14:20:31.891018  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.891027  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:20:31.891033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:20:31.891086  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:20:31.924794  306360 cri.go:89] found id: ""
	I0407 14:20:31.924827  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.924837  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:20:31.924861  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:20:31.924876  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:20:31.972904  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:20:31.972948  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:20:31.988056  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:20:31.988090  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:20:32.061617  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:20:32.061657  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:20:32.061672  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:20:32.165554  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:20:32.165600  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 14:20:32.208010  306360 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:20:32.208080  306360 out.go:270] * 
	W0407 14:20:32.208169  306360 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.208186  306360 out.go:270] * 
	W0407 14:20:32.209134  306360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:20:32.213132  306360 out.go:201] 
	W0407 14:20:32.214433  306360 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.214485  306360 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:20:32.214528  306360 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:20:32.216101  306360 out.go:201] 
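	A minimal sketch of the troubleshooting and retry that the "Suggestion" line above describes, reusing only the profile name, Kubernetes version, and flags already present in this log; the exact flag set for a real retry is an assumption and may differ per environment:
	
	  # inspect kubelet health on the node (the same commands the kubeadm output recommends)
	  minikube -p old-k8s-version-405646 ssh -- sudo systemctl status kubelet
	  minikube -p old-k8s-version-405646 ssh -- sudo journalctl -xeu kubelet
	
	  # hypothetical retry with the cgroup-driver override from the suggestion above
	  minikube start -p old-k8s-version-405646 --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd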
	
	
	==> CRI-O <==
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.779776182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036174779742972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f6b2584-d8fa-4cae-bb0a-1e0e8fb067c0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.780580140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d69a66c-dcce-4bc1-9326-ef66a6517c84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.780642199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d69a66c-dcce-4bc1-9326-ef66a6517c84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.780685771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2d69a66c-dcce-4bc1-9326-ef66a6517c84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.812490753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29648a9c-9b4a-45ab-9a77-e87770ef12f2 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.812582878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29648a9c-9b4a-45ab-9a77-e87770ef12f2 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.813460712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86cf5686-7526-4982-8272-23abd081c920 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.813854276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036174813828778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86cf5686-7526-4982-8272-23abd081c920 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.814423073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3a9bad1-5ced-44c6-9788-6a32f8725893 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.814492224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3a9bad1-5ced-44c6-9788-6a32f8725893 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.814530868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e3a9bad1-5ced-44c6-9788-6a32f8725893 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.846223058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2a3cc83-2ad3-4f35-ac18-c73197d36adc name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.846322164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2a3cc83-2ad3-4f35-ac18-c73197d36adc name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.847672604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34506573-04fa-4350-ac12-1e223d2440c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.848132417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036174848111464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34506573-04fa-4350-ac12-1e223d2440c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.848690507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a91f9882-802d-4a15-bb6b-4e02e64a3563 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.848755702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a91f9882-802d-4a15-bb6b-4e02e64a3563 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.848790033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a91f9882-802d-4a15-bb6b-4e02e64a3563 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.879506051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c80aafae-4fb3-4b95-b77a-2edcf43579a8 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.879599567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c80aafae-4fb3-4b95-b77a-2edcf43579a8 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.880535741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6574d617-0d7c-4a15-a803-73d7198d4d3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.880910174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036174880881454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6574d617-0d7c-4a15-a803-73d7198d4d3c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.881472951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aef34c46-067c-40a7-8ddd-47b690da3674 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.881541570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aef34c46-067c-40a7-8ddd-47b690da3674 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:29:34 old-k8s-version-405646 crio[630]: time="2025-04-07 14:29:34.881580240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aef34c46-067c-40a7-8ddd-47b690da3674 name=/runtime.v1.RuntimeService/ListContainers
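	The empty container lists returned above can be confirmed directly on the node with the same crictl endpoint and journalctl invocation the log already uses; a sketch, assuming the default cri-o socket path shown earlier:
	
	  minikube -p old-k8s-version-405646 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	  minikube -p old-k8s-version-405646 ssh -- sudo journalctl -u crio -n 400 --no-pager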
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 14:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053325] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041250] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 7 14:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.811633] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.228522] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.053600] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065282] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177286] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.157141] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.250668] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.115917] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069863] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.742427] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +13.578914] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 14:16] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Apr 7 14:18] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.067537] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:29:35 up 17 min,  0 users,  load average: 0.47, 0.15, 0.05
	Linux old-k8s-version-405646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0002daf60, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00097c990, 0x24, 0x0, ...)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: net.(*Dialer).DialContext(0xc0002694a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00097c990, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0005f8a40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00097c990, 0x24, 0x60, 0x7fc31bf7d0d8, 0x118, ...)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: net/http.(*Transport).dial(0xc000a7a000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00097c990, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: net/http.(*Transport).dialConn(0xc000a7a000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000538540, 0x5, 0xc00097c990, 0x24, 0x0, 0xc000a73c20, ...)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: net/http.(*Transport).dialConnFor(0xc000a7a000, 0xc000857ef0)
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]: created by net/http.(*Transport).queueForDial
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6551]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 07 14:29:32 old-k8s-version-405646 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 14:29:32 old-k8s-version-405646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 14:29:32 old-k8s-version-405646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 07 14:29:32 old-k8s-version-405646 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 14:29:32 old-k8s-version-405646 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6561]: I0407 14:29:32.793775    6561 server.go:416] Version: v1.20.0
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6561]: I0407 14:29:32.794104    6561 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6561]: I0407 14:29:32.795903    6561 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6561]: W0407 14:29:32.796831    6561 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 07 14:29:32 old-k8s-version-405646 kubelet[6561]: I0407 14:29:32.797029    6561 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (231.607417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (284.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING repeated 40 more times]
E0407 14:30:16.470849  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING repeated 9 more times]
E0407 14:30:25.490922  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING repeated 34 more times]
E0407 14:31:00.586540  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:31:01.002441  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
[previous WARNING repeated 58 more times]
E0407 14:32:00.209787  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:32:06.074675  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/no-preload-421325/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:32:47.433527  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:33:18.330160  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:33:22.060230  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/default-k8s-diff-port-718753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:33:22.631183  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:33:29.139275  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/no-preload-421325/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:33:51.955429  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
E0407 14:34:03.661451  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.163:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.163:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (235.051694ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-405646" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-405646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-405646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.518µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-405646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (218.950091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405646 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-574417 image list                          | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p embed-certs-574417                                  | embed-certs-574417           | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | no-preload-421325 image list                           | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| delete  | -p no-preload-421325                                   | no-preload-421325            | jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	| addons  | enable metrics-server -p newest-cni-541721             | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-718753                           | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-718753 | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | default-k8s-diff-port-718753                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-541721                  | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-541721 --memory=2200 --alsologtostderr   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-541721 image list                           | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	| delete  | -p newest-cni-541721                                   | newest-cni-541721            | jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:15:25
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:15:25.628644  308831 out.go:345] Setting OutFile to fd 1 ...
	I0407 14:15:25.628943  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.628954  308831 out.go:358] Setting ErrFile to fd 2...
	I0407 14:15:25.628958  308831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:15:25.629163  308831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 14:15:25.629716  308831 out.go:352] Setting JSON to false
	I0407 14:15:25.630676  308831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":21473,"bootTime":1744013853,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 14:15:25.630790  308831 start.go:139] virtualization: kvm guest
	I0407 14:15:25.632653  308831 out.go:177] * [newest-cni-541721] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 14:15:25.634114  308831 notify.go:220] Checking for updates...
	I0407 14:15:25.634125  308831 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:15:25.635477  308831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:15:25.636815  308831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:25.638126  308831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 14:15:25.639208  308831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 14:15:25.640304  308831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:15:25.642142  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:25.642732  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.642805  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.658473  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0407 14:15:25.659219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.659736  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.659760  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.660180  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.660352  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.660628  308831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:15:25.660918  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.660962  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.676620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0407 14:15:25.677061  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.677654  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.677687  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.678106  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.678327  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.714508  308831 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 14:15:25.715654  308831 start.go:297] selected driver: kvm2
	I0407 14:15:25.715669  308831 start.go:901] validating driver "kvm2" against &{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPor
ts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.715769  308831 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:15:25.716608  308831 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.716681  308831 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 14:15:25.731568  308831 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 14:15:25.731948  308831 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:25.731981  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:25.732021  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:25.732057  308831 start.go:340] cluster config:
	{Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:25.732169  308831 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:15:25.734706  308831 out.go:177] * Starting "newest-cni-541721" primary control-plane node in "newest-cni-541721" cluster
	I0407 14:15:25.736251  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:25.736285  308831 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0407 14:15:25.736295  308831 cache.go:56] Caching tarball of preloaded images
	I0407 14:15:25.736375  308831 preload.go:172] Found /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0407 14:15:25.736390  308831 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0407 14:15:25.736522  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:25.736737  308831 start.go:360] acquireMachinesLock for newest-cni-541721: {Name:mkbc0d9211b04d7c322a45485d144adcd6ee59fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:15:25.736784  308831 start.go:364] duration metric: took 28.182µs to acquireMachinesLock for "newest-cni-541721"
	I0407 14:15:25.736805  308831 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:15:25.736811  308831 fix.go:54] fixHost starting: 
	I0407 14:15:25.737111  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:25.737147  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:25.751728  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0407 14:15:25.752219  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:25.752697  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:25.752718  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:25.753019  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:25.753228  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:25.753385  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:25.754926  308831 fix.go:112] recreateIfNeeded on newest-cni-541721: state=Stopped err=<nil>
	I0407 14:15:25.754953  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	W0407 14:15:25.755089  308831 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:15:25.757704  308831 out.go:177] * Restarting existing kvm2 VM for "newest-cni-541721" ...
	I0407 14:15:20.896637  306360 cri.go:89] found id: ""
	I0407 14:15:20.896666  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.896673  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:20.896679  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:20.896737  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:20.937796  306360 cri.go:89] found id: ""
	I0407 14:15:20.937828  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.937837  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:20.937843  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:20.937896  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:20.983104  306360 cri.go:89] found id: ""
	I0407 14:15:20.983138  306360 logs.go:282] 0 containers: []
	W0407 14:15:20.983149  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:20.983157  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:20.983222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:21.024555  306360 cri.go:89] found id: ""
	I0407 14:15:21.024591  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.024602  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:21.024609  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:21.024685  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:21.068400  306360 cri.go:89] found id: ""
	I0407 14:15:21.068484  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.068495  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:21.068502  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:21.068572  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:21.107962  306360 cri.go:89] found id: ""
	I0407 14:15:21.107990  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.107998  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:21.108004  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:21.108067  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:21.147955  306360 cri.go:89] found id: ""
	I0407 14:15:21.147981  306360 logs.go:282] 0 containers: []
	W0407 14:15:21.147989  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:21.147999  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:21.148010  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:21.164790  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:21.164818  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:21.236045  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:21.236068  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:21.236081  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:21.313784  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:21.313821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:21.357183  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:21.357215  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:23.907736  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:23.921413  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:23.921481  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:23.959486  306360 cri.go:89] found id: ""
	I0407 14:15:23.959513  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.959520  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:23.959526  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:23.959585  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:23.992912  306360 cri.go:89] found id: ""
	I0407 14:15:23.992938  306360 logs.go:282] 0 containers: []
	W0407 14:15:23.992946  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:23.992952  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:23.993010  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:24.024279  306360 cri.go:89] found id: ""
	I0407 14:15:24.024308  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.024316  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:24.024323  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:24.024376  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:24.062320  306360 cri.go:89] found id: ""
	I0407 14:15:24.062353  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.062362  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:24.062371  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:24.062432  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:24.122748  306360 cri.go:89] found id: ""
	I0407 14:15:24.122774  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.122782  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:24.122787  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:24.122857  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:24.156773  306360 cri.go:89] found id: ""
	I0407 14:15:24.156803  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.156814  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:24.156831  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:24.156899  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:24.192903  306360 cri.go:89] found id: ""
	I0407 14:15:24.192940  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.192952  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:24.192960  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:24.193017  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:24.228041  306360 cri.go:89] found id: ""
	I0407 14:15:24.228081  306360 logs.go:282] 0 containers: []
	W0407 14:15:24.228093  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:24.228105  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:24.228122  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:24.276177  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:24.276212  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:24.289668  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:24.289701  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:24.356935  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:24.356962  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:24.356981  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:24.442103  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:24.442140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:25.758835  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Start
	I0407 14:15:25.759008  308831 main.go:141] libmachine: (newest-cni-541721) starting domain...
	I0407 14:15:25.759031  308831 main.go:141] libmachine: (newest-cni-541721) ensuring networks are active...
	I0407 14:15:25.759774  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network default is active
	I0407 14:15:25.760125  308831 main.go:141] libmachine: (newest-cni-541721) Ensuring network mk-newest-cni-541721 is active
	I0407 14:15:25.760533  308831 main.go:141] libmachine: (newest-cni-541721) getting domain XML...
	I0407 14:15:25.761459  308831 main.go:141] libmachine: (newest-cni-541721) creating domain...
	I0407 14:15:26.961388  308831 main.go:141] libmachine: (newest-cni-541721) waiting for IP...
	I0407 14:15:26.962280  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:26.962679  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:26.962806  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:26.962715  308884 retry.go:31] will retry after 224.710577ms: waiting for domain to come up
	I0407 14:15:27.189309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.189924  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.189984  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.189909  308884 retry.go:31] will retry after 298.222768ms: waiting for domain to come up
	I0407 14:15:27.489516  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.490094  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.490131  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.490026  308884 retry.go:31] will retry after 465.194234ms: waiting for domain to come up
	I0407 14:15:27.956675  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:27.957258  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:27.957283  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:27.957226  308884 retry.go:31] will retry after 534.441737ms: waiting for domain to come up
	I0407 14:15:28.493247  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:28.493782  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:28.493811  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:28.493750  308884 retry.go:31] will retry after 611.035562ms: waiting for domain to come up
	I0407 14:15:29.106699  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.107212  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.107234  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.107187  308884 retry.go:31] will retry after 705.783816ms: waiting for domain to come up
	I0407 14:15:29.814350  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:29.814874  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:29.814904  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:29.814847  308884 retry.go:31] will retry after 951.819617ms: waiting for domain to come up
	I0407 14:15:26.983553  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:26.996033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:26.996104  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:27.029665  306360 cri.go:89] found id: ""
	I0407 14:15:27.029692  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.029700  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:27.029705  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:27.029756  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:27.069962  306360 cri.go:89] found id: ""
	I0407 14:15:27.069992  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.070000  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:27.070009  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:27.070074  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:27.112142  306360 cri.go:89] found id: ""
	I0407 14:15:27.112174  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.112182  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:27.112188  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:27.112240  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:27.152647  306360 cri.go:89] found id: ""
	I0407 14:15:27.152675  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.152685  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:27.152691  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:27.152743  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:27.188973  306360 cri.go:89] found id: ""
	I0407 14:15:27.189004  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.189015  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:27.189023  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:27.189099  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:27.228054  306360 cri.go:89] found id: ""
	I0407 14:15:27.228085  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.228095  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:27.228102  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:27.228164  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:27.262089  306360 cri.go:89] found id: ""
	I0407 14:15:27.262121  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.262131  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:27.262152  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:27.262222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:27.298902  306360 cri.go:89] found id: ""
	I0407 14:15:27.298939  306360 logs.go:282] 0 containers: []
	W0407 14:15:27.298951  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:27.298969  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:27.298988  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:27.338649  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:27.338676  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:27.388606  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:27.388653  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:27.403449  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:27.403491  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:27.469414  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:27.469448  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:27.469467  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.052698  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:30.071454  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:30.071529  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:30.104690  306360 cri.go:89] found id: ""
	I0407 14:15:30.104723  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.104733  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:30.104741  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:30.104805  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:30.139611  306360 cri.go:89] found id: ""
	I0407 14:15:30.139641  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.139651  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:30.139658  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:30.139724  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:30.173648  306360 cri.go:89] found id: ""
	I0407 14:15:30.173679  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.173691  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:30.173702  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:30.173766  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:30.207015  306360 cri.go:89] found id: ""
	I0407 14:15:30.207045  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.207055  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:30.207062  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:30.207141  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:30.242602  306360 cri.go:89] found id: ""
	I0407 14:15:30.242631  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.242642  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:30.242647  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:30.242698  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:30.275775  306360 cri.go:89] found id: ""
	I0407 14:15:30.275811  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.275824  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:30.275834  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:30.275906  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:30.310674  306360 cri.go:89] found id: ""
	I0407 14:15:30.310710  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.310722  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:30.310734  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:30.310803  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:30.342628  306360 cri.go:89] found id: ""
	I0407 14:15:30.342666  306360 logs.go:282] 0 containers: []
	W0407 14:15:30.342677  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:30.342690  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:30.342704  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:30.390588  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:30.390625  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:30.405143  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:30.405179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:30.473557  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:30.473590  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:30.473607  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:30.555915  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:30.555961  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:30.768801  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:30.769309  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:30.769368  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:30.769289  308884 retry.go:31] will retry after 1.473723354s: waiting for domain to come up
	I0407 14:15:32.244907  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:32.245389  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:32.245420  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:32.245345  308884 retry.go:31] will retry after 1.499915681s: waiting for domain to come up
	I0407 14:15:33.747106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:33.747641  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:33.747664  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:33.747621  308884 retry.go:31] will retry after 1.755869329s: waiting for domain to come up
	I0407 14:15:35.505715  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:35.506189  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:35.506224  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:35.506149  308884 retry.go:31] will retry after 1.908921296s: waiting for domain to come up
	I0407 14:15:33.094714  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:33.107818  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:33.107883  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:33.147279  306360 cri.go:89] found id: ""
	I0407 14:15:33.147310  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.147317  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:33.147323  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:33.147374  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:33.182866  306360 cri.go:89] found id: ""
	I0407 14:15:33.182895  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.182903  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:33.182909  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:33.182962  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:33.219845  306360 cri.go:89] found id: ""
	I0407 14:15:33.219881  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.219894  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:33.219903  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:33.219980  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:33.255785  306360 cri.go:89] found id: ""
	I0407 14:15:33.255818  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.255832  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:33.255838  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:33.255888  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:33.296287  306360 cri.go:89] found id: ""
	I0407 14:15:33.296320  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.296331  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:33.296339  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:33.296406  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:33.333123  306360 cri.go:89] found id: ""
	I0407 14:15:33.333156  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.333167  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:33.333174  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:33.333244  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:33.367813  306360 cri.go:89] found id: ""
	I0407 14:15:33.367844  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.367855  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:33.367862  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:33.367930  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:33.401927  306360 cri.go:89] found id: ""
	I0407 14:15:33.401957  306360 logs.go:282] 0 containers: []
	W0407 14:15:33.401964  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:33.401974  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:33.401985  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:33.464350  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:33.464390  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:33.478831  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:33.478866  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:33.554322  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:33.554352  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:33.554370  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:33.632339  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:33.632381  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:37.417168  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:37.417658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:37.417734  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:37.417635  308884 retry.go:31] will retry after 3.116726133s: waiting for domain to come up
	I0407 14:15:40.537848  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:40.538357  308831 main.go:141] libmachine: (newest-cni-541721) DBG | unable to find current IP address of domain newest-cni-541721 in network mk-newest-cni-541721
	I0407 14:15:40.538386  308831 main.go:141] libmachine: (newest-cni-541721) DBG | I0407 14:15:40.538314  308884 retry.go:31] will retry after 2.7485631s: waiting for domain to come up
	I0407 14:15:36.177635  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:36.191117  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:36.191215  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:36.229342  306360 cri.go:89] found id: ""
	I0407 14:15:36.229373  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.229384  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:36.229391  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:36.229461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:36.269119  306360 cri.go:89] found id: ""
	I0407 14:15:36.269151  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.269162  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:36.269170  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:36.269236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:36.312510  306360 cri.go:89] found id: ""
	I0407 14:15:36.312544  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.312556  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:36.312563  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:36.312632  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:36.346706  306360 cri.go:89] found id: ""
	I0407 14:15:36.346741  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.346753  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:36.346762  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:36.346830  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:36.382862  306360 cri.go:89] found id: ""
	I0407 14:15:36.382899  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.382912  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:36.382920  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:36.382989  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:36.424287  306360 cri.go:89] found id: ""
	I0407 14:15:36.424318  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.424329  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:36.424337  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:36.424407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:36.473843  306360 cri.go:89] found id: ""
	I0407 14:15:36.473891  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.473906  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:36.473916  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:36.474002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:36.532647  306360 cri.go:89] found id: ""
	I0407 14:15:36.532685  306360 logs.go:282] 0 containers: []
	W0407 14:15:36.532697  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:36.532711  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:36.532727  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:36.599779  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:36.599820  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:36.614047  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:36.614082  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:36.692006  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:36.692030  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:36.692044  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:36.782142  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:36.782196  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.320544  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:39.333558  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:39.333630  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:39.367209  306360 cri.go:89] found id: ""
	I0407 14:15:39.367244  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.367255  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:39.367264  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:39.367338  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:39.406298  306360 cri.go:89] found id: ""
	I0407 14:15:39.406326  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.406335  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:39.406342  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:39.406407  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:39.440090  306360 cri.go:89] found id: ""
	I0407 14:15:39.440118  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.440128  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:39.440134  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:39.440197  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:39.473483  306360 cri.go:89] found id: ""
	I0407 14:15:39.473514  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.473527  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:39.473534  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:39.473602  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:39.505571  306360 cri.go:89] found id: ""
	I0407 14:15:39.505599  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.505607  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:39.505613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:39.505676  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:39.538929  306360 cri.go:89] found id: ""
	I0407 14:15:39.538961  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.538971  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:39.538980  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:39.539045  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:39.572047  306360 cri.go:89] found id: ""
	I0407 14:15:39.572078  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.572089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:39.572097  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:39.572163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:39.605781  306360 cri.go:89] found id: ""
	I0407 14:15:39.605812  306360 logs.go:282] 0 containers: []
	W0407 14:15:39.605854  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:39.605868  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:39.605885  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:39.684887  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:39.684931  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:39.725609  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:39.725639  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:39.776592  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:39.776634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:39.792687  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:39.792719  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:39.859832  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:43.289843  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290313  308831 main.go:141] libmachine: (newest-cni-541721) found domain IP: 192.168.39.230
	I0407 14:15:43.290342  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has current primary IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.290351  308831 main.go:141] libmachine: (newest-cni-541721) reserving static IP address...
	I0407 14:15:43.290797  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.290844  308831 main.go:141] libmachine: (newest-cni-541721) DBG | skip adding static IP to network mk-newest-cni-541721 - found existing host DHCP lease matching {name: "newest-cni-541721", mac: "52:54:00:e6:36:ee", ip: "192.168.39.230"}
	I0407 14:15:43.290861  308831 main.go:141] libmachine: (newest-cni-541721) reserved static IP address 192.168.39.230 for domain newest-cni-541721
	I0407 14:15:43.290877  308831 main.go:141] libmachine: (newest-cni-541721) waiting for SSH...
	I0407 14:15:43.290888  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Getting to WaitForSSH function...
	I0407 14:15:43.293128  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293457  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.293482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.293603  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH client type: external
	I0407 14:15:43.293630  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Using SSH private key: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa (-rw-------)
	I0407 14:15:43.293658  308831 main.go:141] libmachine: (newest-cni-541721) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0407 14:15:43.293670  308831 main.go:141] libmachine: (newest-cni-541721) DBG | About to run SSH command:
	I0407 14:15:43.293684  308831 main.go:141] libmachine: (newest-cni-541721) DBG | exit 0
	I0407 14:15:43.420319  308831 main.go:141] libmachine: (newest-cni-541721) DBG | SSH cmd err, output: <nil>: 
	I0407 14:15:43.420721  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetConfigRaw
	I0407 14:15:43.421390  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.424495  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.424838  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.424863  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.425125  308831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/config.json ...
	I0407 14:15:43.425347  308831 machine.go:93] provisionDockerMachine start ...
	I0407 14:15:43.425369  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:43.425612  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.428118  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428491  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.428518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.428670  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.428877  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429081  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.429220  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.429407  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.429675  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.429686  308831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:15:43.536790  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:15:43.536829  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537083  308831 buildroot.go:166] provisioning hostname "newest-cni-541721"
	I0407 14:15:43.537120  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.537329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.540191  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540559  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.540585  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.540732  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.540899  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541132  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.541282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.541478  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.541679  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.541692  308831 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-541721 && echo "newest-cni-541721" | sudo tee /etc/hostname
	I0407 14:15:43.663263  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-541721
	
	I0407 14:15:43.663296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.665913  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666215  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.666245  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.666389  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:43.666571  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666726  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:43.666878  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:43.667008  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:43.667209  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:43.667223  308831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-541721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-541721/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-541721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:15:43.781703  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:15:43.781735  308831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20598-242355/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-242355/.minikube}
	I0407 14:15:43.781770  308831 buildroot.go:174] setting up certificates
	I0407 14:15:43.781781  308831 provision.go:84] configureAuth start
	I0407 14:15:43.781789  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetMachineName
	I0407 14:15:43.782098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:43.784807  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785138  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.785165  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.785310  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:43.787964  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788465  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:43.788506  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:43.788684  308831 provision.go:143] copyHostCerts
	I0407 14:15:43.788737  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem, removing ...
	I0407 14:15:43.788762  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem
	I0407 14:15:43.788828  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/key.pem (1679 bytes)
	I0407 14:15:43.788909  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem, removing ...
	I0407 14:15:43.788917  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem
	I0407 14:15:43.788941  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/ca.pem (1078 bytes)
	I0407 14:15:43.789008  308831 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem, removing ...
	I0407 14:15:43.789016  308831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem
	I0407 14:15:43.789045  308831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-242355/.minikube/cert.pem (1123 bytes)
	I0407 14:15:43.789089  308831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-541721 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-541721]
	I0407 14:15:44.038906  308831 provision.go:177] copyRemoteCerts
	I0407 14:15:44.038972  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:15:44.038998  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.041517  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.041889  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.041921  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.042056  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.042296  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.042445  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.042564  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.126574  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0407 14:15:44.150348  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 14:15:44.173128  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:15:44.196028  308831 provision.go:87] duration metric: took 414.219253ms to configureAuth
	I0407 14:15:44.196057  308831 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:15:44.196256  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:44.196365  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.198992  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199332  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.199359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.199473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.199649  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199841  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.199983  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.200187  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.200392  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.200406  308831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0407 14:15:44.425698  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0407 14:15:44.425730  308831 machine.go:96] duration metric: took 1.00036936s to provisionDockerMachine
	I0407 14:15:44.425742  308831 start.go:293] postStartSetup for "newest-cni-541721" (driver="kvm2")
	I0407 14:15:44.425753  308831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:15:44.425769  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.426237  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:15:44.426282  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.428748  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429105  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.429137  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.429312  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.429508  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.429691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.429839  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.514924  308831 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:15:44.519014  308831 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:15:44.519041  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/addons for local assets ...
	I0407 14:15:44.519105  308831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-242355/.minikube/files for local assets ...
	I0407 14:15:44.519203  308831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem -> 2495162.pem in /etc/ssl/certs
	I0407 14:15:44.519338  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:15:44.528306  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:44.552208  308831 start.go:296] duration metric: took 126.448126ms for postStartSetup
	I0407 14:15:44.552258  308831 fix.go:56] duration metric: took 18.815446562s for fixHost
	I0407 14:15:44.552283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.555012  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555411  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.555436  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.555613  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.555777  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.555921  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.556086  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.556274  308831 main.go:141] libmachine: Using SSH client type: native
	I0407 14:15:44.556581  308831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0407 14:15:44.556596  308831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:15:44.665315  308831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035344.637882085
	
	I0407 14:15:44.665344  308831 fix.go:216] guest clock: 1744035344.637882085
	I0407 14:15:44.665352  308831 fix.go:229] Guest: 2025-04-07 14:15:44.637882085 +0000 UTC Remote: 2025-04-07 14:15:44.552262543 +0000 UTC m=+18.960633497 (delta=85.619542ms)
	I0407 14:15:44.665378  308831 fix.go:200] guest clock delta is within tolerance: 85.619542ms
	I0407 14:15:44.665385  308831 start.go:83] releasing machines lock for "newest-cni-541721", held for 18.928588169s
	I0407 14:15:44.665411  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.665665  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:44.668359  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.668769  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.668796  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.669001  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669473  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669663  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:44.669764  308831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 14:15:44.669821  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.669881  308831 ssh_runner.go:195] Run: cat /version.json
	I0407 14:15:44.669903  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:44.672537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672728  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.672882  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.672910  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673079  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:44.673108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:44.673126  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673306  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673329  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:44.673471  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:44.673479  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673639  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:44.673629  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.673808  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:44.772603  308831 ssh_runner.go:195] Run: systemctl --version
	I0407 14:15:44.778824  308831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0407 14:15:44.927200  308831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:15:44.934229  308831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:15:44.934295  308831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:15:44.949862  308831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:15:44.949886  308831 start.go:495] detecting cgroup driver to use...
	I0407 14:15:44.949946  308831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:15:44.965426  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:15:44.978798  308831 docker.go:217] disabling cri-docker service (if available) ...
	I0407 14:15:44.978861  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0407 14:15:44.991899  308831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0407 14:15:45.004571  308831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0407 14:15:45.128809  308831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0407 14:15:45.285871  308831 docker.go:233] disabling docker service ...
	I0407 14:15:45.285943  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0407 14:15:45.300353  308831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0407 14:15:45.313521  308831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0407 14:15:45.446753  308831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0407 14:15:45.566017  308831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0407 14:15:45.581006  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:15:45.599340  308831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0407 14:15:45.599422  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.609965  308831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0407 14:15:45.610059  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.620860  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:42.361106  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:42.374378  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:42.374461  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:42.409267  306360 cri.go:89] found id: ""
	I0407 14:15:42.409296  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.409304  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:42.409309  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:42.409361  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:42.442512  306360 cri.go:89] found id: ""
	I0407 14:15:42.442540  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.442548  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:42.442554  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:42.442603  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:42.476016  306360 cri.go:89] found id: ""
	I0407 14:15:42.476044  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.476055  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:42.476063  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:42.476127  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:42.507103  306360 cri.go:89] found id: ""
	I0407 14:15:42.507138  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.507145  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:42.507151  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:42.507205  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:42.543140  306360 cri.go:89] found id: ""
	I0407 14:15:42.543167  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.543178  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:42.543185  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:42.543260  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:42.583718  306360 cri.go:89] found id: ""
	I0407 14:15:42.583749  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.583756  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:42.583764  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:42.583826  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:42.617614  306360 cri.go:89] found id: ""
	I0407 14:15:42.617649  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.617660  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:42.617668  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:42.617736  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:42.652193  306360 cri.go:89] found id: ""
	I0407 14:15:42.652220  306360 logs.go:282] 0 containers: []
	W0407 14:15:42.652227  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:42.652237  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:42.652250  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:42.700778  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:42.700817  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:42.713926  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:42.713958  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:42.781552  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:42.781577  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:42.781590  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:42.857460  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:42.857502  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:45.397689  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:45.416022  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:45.416089  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:45.457038  306360 cri.go:89] found id: ""
	I0407 14:15:45.457078  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.457089  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:45.457097  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:45.457168  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:45.491527  306360 cri.go:89] found id: ""
	I0407 14:15:45.491559  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.491570  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:45.491578  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:45.491647  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:45.524296  306360 cri.go:89] found id: ""
	I0407 14:15:45.524333  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.524344  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:45.524352  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:45.524416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:45.562418  306360 cri.go:89] found id: ""
	I0407 14:15:45.562450  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.562461  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:45.562469  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:45.562537  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:45.601384  306360 cri.go:89] found id: ""
	I0407 14:15:45.601409  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.601417  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:45.601423  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:45.601471  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:45.638899  306360 cri.go:89] found id: ""
	I0407 14:15:45.638924  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.638933  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:45.638939  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:45.639005  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:45.675994  306360 cri.go:89] found id: ""
	I0407 14:15:45.676031  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.676047  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:45.676064  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:45.676128  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:45.714599  306360 cri.go:89] found id: ""
	I0407 14:15:45.714626  306360 logs.go:282] 0 containers: []
	W0407 14:15:45.714637  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:45.714648  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:45.714665  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:45.780477  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:45.780527  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:45.794822  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:45.794859  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:45.866895  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:45.866921  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:45.866944  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:45.631474  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.644263  308831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:15:45.658794  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.670123  308831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.689249  308831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0407 14:15:45.699508  308831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:15:45.709814  308831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:15:45.709869  308831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:15:45.723859  308831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:15:45.733593  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:45.849319  308831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0407 14:15:45.947041  308831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0407 14:15:45.947134  308831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0407 14:15:45.952013  308831 start.go:563] Will wait 60s for crictl version
	I0407 14:15:45.952094  308831 ssh_runner.go:195] Run: which crictl
	I0407 14:15:45.956063  308831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:15:46.003168  308831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0407 14:15:46.003266  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.030604  308831 ssh_runner.go:195] Run: crio --version
	I0407 14:15:46.060415  308831 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0407 14:15:46.061532  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetIP
	I0407 14:15:46.064257  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064649  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:46.064686  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:46.064942  308831 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0407 14:15:46.069108  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:46.082697  308831 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0407 14:15:46.083791  308831 kubeadm.go:883] updating cluster {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5
41721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAdd
ress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:15:46.083896  308831 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0407 14:15:46.083950  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:46.117284  308831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0407 14:15:46.117364  308831 ssh_runner.go:195] Run: which lz4
	I0407 14:15:46.121377  308831 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 14:15:46.125460  308831 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 14:15:46.125488  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0407 14:15:47.523799  308831 crio.go:462] duration metric: took 1.402446769s to copy over tarball
	I0407 14:15:47.523885  308831 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 14:15:49.780413  308831 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256487333s)
	I0407 14:15:49.780472  308831 crio.go:469] duration metric: took 2.256631266s to extract the tarball
	I0407 14:15:49.780484  308831 ssh_runner.go:146] rm: /preloaded.tar.lz4
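
The preload step above copies the cached image tarball over SSH and unpacks it straight into /var: --xattrs --xattrs-include security.capability preserves file capabilities on the preloaded binaries, and -I lz4 hands decompression to lz4. A minimal Go sketch that shells out to the same tar invocation (assumes sudo, tar and lz4 are present on the host; not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Rough equivalent of the extraction step in the log. The xattr flags
	// keep capabilities such as security.capability on preloaded files.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}
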
	I0407 14:15:49.817617  308831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0407 14:15:49.861772  308831 crio.go:514] all images are preloaded for cri-o runtime.
	I0407 14:15:49.861798  308831 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:15:49.861811  308831 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.32.2 crio true true} ...
	I0407 14:15:49.861914  308831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-541721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-541721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:15:49.861982  308831 ssh_runner.go:195] Run: crio config
	I0407 14:15:49.906766  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:49.906790  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:49.906799  308831 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0407 14:15:49.906821  308831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-541721 NodeName:newest-cni-541721 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:15:49.906963  308831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-541721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
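
The generated kubeadm.yaml above is four documents joined by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. The pod CIDR 10.42.0.0/16 comes from the kubeadm.pod-network-cidr extra option and has to stay disjoint from the 10.96.0.0/12 service CIDR. A small sketch that checks the two ranges do not overlap (CIDR values copied from the config; the helper name is assumed):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	// CIDRs taken from the kubeadm config above.
	_, pod, _ := net.ParseCIDR("10.42.0.0/16")
	_, svc, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service CIDRs overlap:", overlaps(pod, svc))
}
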
	
	I0407 14:15:49.907028  308831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:15:49.917114  308831 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:15:49.917177  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:15:49.927296  308831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0407 14:15:49.945058  308831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:15:49.962171  308831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0407 14:15:49.981232  308831 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0407 14:15:49.985429  308831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:15:49.997919  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:50.112228  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:50.138008  308831 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721 for IP: 192.168.39.230
	I0407 14:15:50.138038  308831 certs.go:194] generating shared ca certs ...
	I0407 14:15:50.138056  308831 certs.go:226] acquiring lock for ca certs: {Name:mk1da0e2436b5b22d130d00c7c348c272ee34f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.138217  308831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key
	I0407 14:15:50.138257  308831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key
	I0407 14:15:50.138269  308831 certs.go:256] generating profile certs ...
	I0407 14:15:50.138383  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/client.key
	I0407 14:15:50.138463  308831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key.ae70fd14
	I0407 14:15:50.138512  308831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key
	I0407 14:15:50.138669  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem (1338 bytes)
	W0407 14:15:50.138721  308831 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516_empty.pem, impossibly tiny 0 bytes
	I0407 14:15:50.138735  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 14:15:50.138774  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/ca.pem (1078 bytes)
	I0407 14:15:50.138805  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/cert.pem (1123 bytes)
	I0407 14:15:50.138835  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/certs/key.pem (1679 bytes)
	I0407 14:15:50.138899  308831 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem (1708 bytes)
	I0407 14:15:50.139675  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:15:50.197283  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:15:50.242193  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:15:50.269592  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:15:50.295620  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 14:15:50.326901  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:15:50.350149  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:15:50.373570  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/newest-cni-541721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:15:50.396967  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/certs/249516.pem --> /usr/share/ca-certificates/249516.pem (1338 bytes)
	I0407 14:15:50.419713  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/ssl/certs/2495162.pem --> /usr/share/ca-certificates/2495162.pem (1708 bytes)
	I0407 14:15:50.443345  308831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-242355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:15:50.466277  308831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:15:50.482772  308831 ssh_runner.go:195] Run: openssl version
	I0407 14:15:50.488692  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/249516.pem && ln -fs /usr/share/ca-certificates/249516.pem /etc/ssl/certs/249516.pem"
	I0407 14:15:50.499480  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504091  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 13:03 /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.504182  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/249516.pem
	I0407 14:15:50.510343  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/249516.pem /etc/ssl/certs/51391683.0"
	I0407 14:15:50.521521  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2495162.pem && ln -fs /usr/share/ca-certificates/2495162.pem /etc/ssl/certs/2495162.pem"
	I0407 14:15:50.532621  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537354  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 13:03 /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.537410  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2495162.pem
	I0407 14:15:50.543022  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2495162.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:15:50.554034  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:15:50.564979  308831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569666  308831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.569727  308831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:15:50.575423  308831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
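
Each block above installs a CA into the guest's OpenSSL trust store: the certificate is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL-based clients can find it. A rough Go sketch of the same idea, shelling out to openssl for the hash (paths are from the log; the helper is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir as "<subject hash>.0", the
// layout OpenSSL uses to look up trusted CAs (sketch; assumes the openssl
// binary is on PATH and dir is writable).
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := dir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
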
	I0407 14:15:50.586213  308831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:15:50.590961  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:15:50.596887  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:15:50.602578  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:15:50.608528  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:15:50.614421  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:15:50.620333  308831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
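
The openssl x509 -checkend 86400 calls above verify that each existing control-plane certificate will still be valid 24 hours from now before it is reused. A roughly equivalent check in Go with crypto/x509 (sketch only; the certificate path and duration mirror the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (path and duration here are illustrative).
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
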
	I0407 14:15:50.626231  308831 kubeadm.go:392] StartCluster: {Name:newest-cni-541721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-5417
21 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:15:50.626391  308831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0407 14:15:50.626505  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:45.951585  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:45.951615  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:48.488815  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:48.507944  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:48.508026  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:48.551257  306360 cri.go:89] found id: ""
	I0407 14:15:48.551300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.551314  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:48.551324  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:48.551402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:48.595600  306360 cri.go:89] found id: ""
	I0407 14:15:48.595626  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.595634  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:48.595640  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:48.595704  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:48.639221  306360 cri.go:89] found id: ""
	I0407 14:15:48.639248  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.639255  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:48.639261  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:48.639326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:48.680520  306360 cri.go:89] found id: ""
	I0407 14:15:48.680562  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.680575  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:48.680585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:48.680679  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:48.728260  306360 cri.go:89] found id: ""
	I0407 14:15:48.728300  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.728315  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:48.728326  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:48.728410  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:48.773839  306360 cri.go:89] found id: ""
	I0407 14:15:48.773875  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.773886  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:48.773893  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:48.773955  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:48.814915  306360 cri.go:89] found id: ""
	I0407 14:15:48.814947  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.814957  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:48.814963  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:48.815028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:48.860191  306360 cri.go:89] found id: ""
	I0407 14:15:48.860225  306360 logs.go:282] 0 containers: []
	W0407 14:15:48.860245  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:48.860258  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:48.860273  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:48.922676  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:48.922714  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:48.939569  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:48.939618  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:49.016199  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:49.016225  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:49.016248  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:49.097968  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:49.098013  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:50.663771  308831 cri.go:89] found id: ""
	I0407 14:15:50.663873  308831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:15:50.674085  308831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:15:50.674107  308831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:15:50.674160  308831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:15:50.683827  308831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:15:50.684345  308831 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-541721" does not appear in /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:50.684567  308831 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-242355/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-541721" cluster setting kubeconfig missing "newest-cni-541721" context setting]
	I0407 14:15:50.684927  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:50.686121  308831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:15:50.695269  308831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I0407 14:15:50.695302  308831 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:15:50.695314  308831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0407 14:15:50.695355  308831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0407 14:15:50.736911  308831 cri.go:89] found id: ""
	I0407 14:15:50.737008  308831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:15:50.753425  308831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:15:50.765206  308831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:15:50.765225  308831 kubeadm.go:157] found existing configuration files:
	
	I0407 14:15:50.765267  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:15:50.774388  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:15:50.774441  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:15:50.783710  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:15:50.792577  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:15:50.792633  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:15:50.802813  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.811735  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:15:50.811788  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:15:50.820555  308831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:15:50.829705  308831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:15:50.829752  308831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:15:50.839810  308831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:15:50.849133  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:50.964318  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.072919  308831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108554265s)
	I0407 14:15:52.072960  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.328909  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.421835  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:52.499558  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:52.499668  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.000158  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.500670  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:53.520865  308831 api_server.go:72] duration metric: took 1.021307622s to wait for apiserver process to appear ...
	I0407 14:15:53.520900  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:53.520929  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:51.641164  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:51.655473  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:51.655548  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:51.690008  306360 cri.go:89] found id: ""
	I0407 14:15:51.690036  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.690047  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:51.690055  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:51.690118  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:51.728115  306360 cri.go:89] found id: ""
	I0407 14:15:51.728141  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.728150  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:51.728157  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:51.728222  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:51.764117  306360 cri.go:89] found id: ""
	I0407 14:15:51.764156  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.764168  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:51.764180  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:51.764243  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:51.801243  306360 cri.go:89] found id: ""
	I0407 14:15:51.801279  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.801291  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:51.801299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:51.801363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:51.838262  306360 cri.go:89] found id: ""
	I0407 14:15:51.838292  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.838302  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:51.838310  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:51.838378  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:51.880251  306360 cri.go:89] found id: ""
	I0407 14:15:51.880284  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.880294  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:51.880302  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:51.880373  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:51.922175  306360 cri.go:89] found id: ""
	I0407 14:15:51.922203  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.922213  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:51.922220  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:51.922291  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:51.963932  306360 cri.go:89] found id: ""
	I0407 14:15:51.963960  306360 logs.go:282] 0 containers: []
	W0407 14:15:51.963970  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:51.963985  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:51.964000  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:52.046274  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:52.046322  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:52.093979  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:52.094019  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:52.148613  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:52.148660  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:52.162525  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:52.162559  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:52.239788  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:54.740063  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:54.757191  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:54.757267  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:54.789524  306360 cri.go:89] found id: ""
	I0407 14:15:54.789564  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.789575  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:54.789584  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:54.789646  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:54.823746  306360 cri.go:89] found id: ""
	I0407 14:15:54.823785  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.823797  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:54.823805  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:54.823875  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:54.861371  306360 cri.go:89] found id: ""
	I0407 14:15:54.861406  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.861417  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:54.861424  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:54.861486  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:54.896286  306360 cri.go:89] found id: ""
	I0407 14:15:54.896318  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.896327  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:54.896334  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:54.896402  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:54.938594  306360 cri.go:89] found id: ""
	I0407 14:15:54.938632  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.938643  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:54.938651  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:54.938722  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:54.971701  306360 cri.go:89] found id: ""
	I0407 14:15:54.971737  306360 logs.go:282] 0 containers: []
	W0407 14:15:54.971745  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:54.971751  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:54.971809  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:55.008651  306360 cri.go:89] found id: ""
	I0407 14:15:55.008682  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.008693  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:55.008700  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:55.008768  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:55.043829  306360 cri.go:89] found id: ""
	I0407 14:15:55.043860  306360 logs.go:282] 0 containers: []
	W0407 14:15:55.043868  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:55.043879  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:55.043899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:55.094682  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:55.094720  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:15:55.109798  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:55.109855  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:55.187514  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:55.187540  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:55.187555  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:55.273313  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:55.273360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:56.021402  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.021428  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.021442  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.066617  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:15:56.066650  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:15:56.521245  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:56.526043  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:56.526070  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.021581  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.026339  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:15:57.026365  308831 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:15:57.521022  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:57.525667  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:57.532348  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:57.532377  308831 api_server.go:131] duration metric: took 4.011467673s to wait for apiserver health ...
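
The 403 responses (anonymous user) and the 500s with failing poststarthook/rbac/bootstrap-roles checks are expected while the restarted apiserver finishes bootstrapping; the health wait simply keeps polling /healthz until it returns 200. A minimal polling loop in the same spirit (sketch; the URL comes from the log, the timeouts are assumed, and InsecureSkipVerify is used only because this sketch does not load the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver healthz endpoint until it reports 200 OK or the
	// deadline passes.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.230:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz:", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
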
	I0407 14:15:57.532391  308831 cni.go:84] Creating CNI manager for ""
	I0407 14:15:57.532400  308831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 14:15:57.534300  308831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 14:15:57.535520  308831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 14:15:57.547844  308831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 14:15:57.567595  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:57.571906  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:57.571945  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:57.571953  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:57.571961  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:57.571967  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:57.571978  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 14:15:57.571986  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:57.571991  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:57.571999  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 14:15:57.572004  308831 system_pods.go:74] duration metric: took 4.389672ms to wait for pod list to return data ...
	I0407 14:15:57.572014  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:57.575009  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:57.575029  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:57.575040  308831 node_conditions.go:105] duration metric: took 3.021612ms to run NodePressure ...
	I0407 14:15:57.575056  308831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:15:57.880816  308831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:15:57.894579  308831 ops.go:34] apiserver oom_adj: -16
	I0407 14:15:57.894607  308831 kubeadm.go:597] duration metric: took 7.220492712s to restartPrimaryControlPlane
	I0407 14:15:57.894619  308831 kubeadm.go:394] duration metric: took 7.268398637s to StartCluster
	I0407 14:15:57.894641  308831 settings.go:142] acquiring lock: {Name:mk4f0a46db7c57f47f856bd845390df879e08200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.894822  308831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 14:15:57.896037  308831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-242355/kubeconfig: {Name:mkef4208e7f217ec5ec7c15cd00232eac7047b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:15:57.896384  308831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0407 14:15:57.896474  308831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:15:57.896568  308831 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-541721"
	I0407 14:15:57.896589  308831 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-541721"
	W0407 14:15:57.896596  308831 addons.go:247] addon storage-provisioner should already be in state true
	I0407 14:15:57.896613  308831 addons.go:69] Setting default-storageclass=true in profile "newest-cni-541721"
	I0407 14:15:57.896638  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896625  308831 addons.go:69] Setting dashboard=true in profile "newest-cni-541721"
	I0407 14:15:57.896642  308831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-541721"
	I0407 14:15:57.896665  308831 addons.go:238] Setting addon dashboard=true in "newest-cni-541721"
	W0407 14:15:57.896675  308831 addons.go:247] addon dashboard should already be in state true
	I0407 14:15:57.896682  308831 addons.go:69] Setting metrics-server=true in profile "newest-cni-541721"
	I0407 14:15:57.896709  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.896720  308831 addons.go:238] Setting addon metrics-server=true in "newest-cni-541721"
	W0407 14:15:57.896730  308831 addons.go:247] addon metrics-server should already be in state true
	I0407 14:15:57.896761  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.897130  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897144  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897129  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897179  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897224  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897170  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897247  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.897289  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.897439  308831 config.go:182] Loaded profile config "newest-cni-541721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 14:15:57.898160  308831 out.go:177] * Verifying Kubernetes components...
	I0407 14:15:57.899427  308831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:15:57.914645  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0407 14:15:57.914658  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0407 14:15:57.915088  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915221  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.915772  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915789  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.915919  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.915929  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.916179  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916232  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.916344  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.916804  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.916846  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.917048  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0407 14:15:57.917542  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.918163  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.918178  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.918569  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.919092  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919123  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.919192  308831 addons.go:238] Setting addon default-storageclass=true in "newest-cni-541721"
	W0407 14:15:57.919205  308831 addons.go:247] addon default-storageclass should already be in state true
	I0407 14:15:57.919233  308831 host.go:66] Checking if "newest-cni-541721" exists ...
	I0407 14:15:57.919576  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.919605  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.920769  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0407 14:15:57.921236  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.921729  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.921752  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.922088  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.922572  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.922608  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.937572  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0407 14:15:57.937695  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0407 14:15:57.938194  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.938660  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0407 14:15:57.938863  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.938887  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.938963  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939251  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.939620  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0407 14:15:57.939642  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.939848  308831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 14:15:57.939900  308831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 14:15:57.940021  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.940071  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940086  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940288  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940312  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940532  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.940651  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.940673  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.940694  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.940997  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941226  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.941293  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.941418  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.943066  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.943556  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.944233  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.945868  308831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:15:57.945873  308831 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0407 14:15:57.945925  308831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0407 14:15:57.947501  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 14:15:57.947525  308831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 14:15:57.947549  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.947592  308831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:57.947606  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 14:15:57.947682  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.949194  308831 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0407 14:15:57.950596  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0407 14:15:57.950612  308831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0407 14:15:57.950633  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.951106  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951518  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.951536  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951608  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.951691  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.951866  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952012  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952224  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.952336  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.952370  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.952455  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.952697  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.952854  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.952995  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.954108  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954455  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.954482  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.954659  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.954827  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.954967  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.955093  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:57.975194  308831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0407 14:15:57.975616  308831 main.go:141] libmachine: () Calling .GetVersion
	I0407 14:15:57.976107  308831 main.go:141] libmachine: Using API Version  1
	I0407 14:15:57.976139  308831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 14:15:57.976544  308831 main.go:141] libmachine: () Calling .GetMachineName
	I0407 14:15:57.976751  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetState
	I0407 14:15:57.978595  308831 main.go:141] libmachine: (newest-cni-541721) Calling .DriverName
	I0407 14:15:57.978824  308831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:57.978842  308831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 14:15:57.978862  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHHostname
	I0407 14:15:57.982043  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982380  308831 main.go:141] libmachine: (newest-cni-541721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:36:ee", ip: ""} in network mk-newest-cni-541721: {Iface:virbr1 ExpiryTime:2025-04-07 15:15:37 +0000 UTC Type:0 Mac:52:54:00:e6:36:ee Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-541721 Clientid:01:52:54:00:e6:36:ee}
	I0407 14:15:57.982410  308831 main.go:141] libmachine: (newest-cni-541721) DBG | domain newest-cni-541721 has defined IP address 192.168.39.230 and MAC address 52:54:00:e6:36:ee in network mk-newest-cni-541721
	I0407 14:15:57.982678  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHPort
	I0407 14:15:57.982840  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHKeyPath
	I0407 14:15:57.982966  308831 main.go:141] libmachine: (newest-cni-541721) Calling .GetSSHUsername
	I0407 14:15:57.983081  308831 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/newest-cni-541721/id_rsa Username:docker}
	I0407 14:15:58.102404  308831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:15:58.120015  308831 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:15:58.120102  308831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:58.135300  308831 api_server.go:72] duration metric: took 238.836482ms to wait for apiserver process to appear ...
	I0407 14:15:58.135329  308831 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:15:58.135349  308831 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0407 14:15:58.141206  308831 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0407 14:15:58.142587  308831 api_server.go:141] control plane version: v1.32.2
	I0407 14:15:58.142606  308831 api_server.go:131] duration metric: took 7.270895ms to wait for apiserver health ...
	I0407 14:15:58.142614  308831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:15:58.146900  308831 system_pods.go:59] 8 kube-system pods found
	I0407 14:15:58.146926  308831 system_pods.go:61] "coredns-668d6bf9bc-kwfnj" [c312b7f9-1687-4be6-ad08-27dca9ba736f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:15:58.146935  308831 system_pods.go:61] "etcd-newest-cni-541721" [42628491-612b-4295-88bb-07ac9eb7ab9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:15:58.146943  308831 system_pods.go:61] "kube-apiserver-newest-cni-541721" [07768ac0-2f44-4b96-bfe5-acfb91362045] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:15:58.146948  308831 system_pods.go:61] "kube-controller-manager-newest-cni-541721" [83a4f8c5-c745-47a9-9cc6-2456566c28a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:15:58.146955  308831 system_pods.go:61] "kube-proxy-crp62" [47febbe3-a277-4779-aee8-ba1c5433f21d] Running
	I0407 14:15:58.146961  308831 system_pods.go:61] "kube-scheduler-newest-cni-541721" [5b4ee840-ac6a-4214-9179-5e6d5af9f764] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:15:58.146966  308831 system_pods.go:61] "metrics-server-f79f97bbb-kc7kt" [2484cb12-61a6-4de3-8dd6-bfcb4dcb5baa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0407 14:15:58.146972  308831 system_pods.go:61] "storage-provisioner" [e41f18c2-1442-463f-ae4b-bc47b254aa7a] Running
	I0407 14:15:58.146978  308831 system_pods.go:74] duration metric: took 4.358597ms to wait for pod list to return data ...
	I0407 14:15:58.146986  308831 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:15:58.150282  308831 default_sa.go:45] found service account: "default"
	I0407 14:15:58.150299  308831 default_sa.go:55] duration metric: took 3.303841ms for default service account to be created ...
	I0407 14:15:58.150309  308831 kubeadm.go:582] duration metric: took 253.863257ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0407 14:15:58.150322  308831 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:15:58.153173  308831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:15:58.153197  308831 node_conditions.go:123] node cpu capacity is 2
	I0407 14:15:58.153211  308831 node_conditions.go:105] duration metric: took 2.884813ms to run NodePressure ...
	I0407 14:15:58.153224  308831 start.go:241] waiting for startup goroutines ...
	I0407 14:15:58.193220  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:15:58.219746  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 14:15:58.279762  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0407 14:15:58.279792  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0407 14:15:58.310829  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 14:15:58.310854  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0407 14:15:58.365195  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0407 14:15:58.365223  308831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0407 14:15:58.418268  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 14:15:58.418311  308831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 14:15:58.452087  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0407 14:15:58.452125  308831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0407 14:15:58.472397  308831 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.472435  308831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 14:15:58.493767  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0407 14:15:58.493792  308831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0407 14:15:58.538632  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 14:15:58.591626  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0407 14:15:58.591661  308831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0407 14:15:58.674454  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0407 14:15:58.674490  308831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0407 14:15:58.705316  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0407 14:15:58.705355  308831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0407 14:15:58.728819  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0407 14:15:58.728849  308831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0407 14:15:58.748297  308831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 14:15:58.748328  308831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0407 14:15:58.771377  308831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0407 14:15:59.673041  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.453258343s)
	I0407 14:15:59.673107  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673119  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673482  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673507  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673518  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.673527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.673768  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.673788  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.673805  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.674036  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.480774359s)
	I0407 14:15:59.674082  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674098  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674344  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674361  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674372  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.674387  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.674683  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.674696  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.674710  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:15:59.695131  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:15:59.695152  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:15:59.695501  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:15:59.695523  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:15:59.695537  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090200  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.55151201s)
	I0407 14:16:00.090258  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090283  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090628  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090645  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.090662  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.090672  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.090678  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090980  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.090989  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.090997  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.091007  308831 addons.go:479] Verifying addon metrics-server=true in "newest-cni-541721"
	I0407 14:16:00.245449  308831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.473999327s)
	I0407 14:16:00.245510  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245527  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245797  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.245858  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.245882  308831 main.go:141] libmachine: Making call to close driver server
	I0407 14:16:00.245894  308831 main.go:141] libmachine: (newest-cni-541721) Calling .Close
	I0407 14:16:00.245895  308831 main.go:141] libmachine: (newest-cni-541721) DBG | Closing plugin on server side
	I0407 14:16:00.246148  308831 main.go:141] libmachine: Successfully made call to close driver server
	I0407 14:16:00.246165  308831 main.go:141] libmachine: Making call to close connection to plugin binary
	I0407 14:16:00.247614  308831 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-541721 addons enable metrics-server
	
	I0407 14:16:00.248959  308831 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0407 14:16:00.250078  308831 addons.go:514] duration metric: took 2.353612079s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0407 14:16:00.250126  308831 start.go:246] waiting for cluster config update ...
	I0407 14:16:00.250153  308831 start.go:255] writing updated cluster config ...
	I0407 14:16:00.250500  308831 ssh_runner.go:195] Run: rm -f paused
	I0407 14:16:00.299045  308831 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:16:00.300679  308831 out.go:177] * Done! kubectl is now configured to use "newest-cni-541721" cluster and "default" namespace by default
	I0407 14:15:57.811712  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:15:57.825529  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:15:57.825597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:15:57.863098  306360 cri.go:89] found id: ""
	I0407 14:15:57.863139  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.863152  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:15:57.863160  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:15:57.863231  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:15:57.902011  306360 cri.go:89] found id: ""
	I0407 14:15:57.902049  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.902059  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:15:57.902067  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:15:57.902134  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:15:57.965448  306360 cri.go:89] found id: ""
	I0407 14:15:57.965475  306360 logs.go:282] 0 containers: []
	W0407 14:15:57.965485  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:15:57.965492  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:15:57.965554  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:15:58.012478  306360 cri.go:89] found id: ""
	I0407 14:15:58.012508  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.012519  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:15:58.012528  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:15:58.012591  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:15:58.046324  306360 cri.go:89] found id: ""
	I0407 14:15:58.046352  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.046359  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:15:58.046365  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:15:58.046416  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:15:58.082655  306360 cri.go:89] found id: ""
	I0407 14:15:58.082690  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.082701  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:15:58.082771  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:15:58.082845  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:15:58.117888  306360 cri.go:89] found id: ""
	I0407 14:15:58.117917  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.117929  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:15:58.117936  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:15:58.118002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:15:58.158074  306360 cri.go:89] found id: ""
	I0407 14:15:58.158100  306360 logs.go:282] 0 containers: []
	W0407 14:15:58.158110  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:15:58.158122  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:15:58.158140  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:15:58.250799  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:15:58.250823  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:15:58.250839  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:15:58.331250  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:15:58.331289  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:15:58.373589  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:15:58.373616  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:15:58.441487  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:15:58.441523  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:00.956209  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:00.969519  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:00.969597  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:01.006091  306360 cri.go:89] found id: ""
	I0407 14:16:01.006123  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.006134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:01.006142  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:01.006208  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:01.040220  306360 cri.go:89] found id: ""
	I0407 14:16:01.040251  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.040262  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:01.040271  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:01.040341  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:01.075777  306360 cri.go:89] found id: ""
	I0407 14:16:01.075813  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.075824  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:01.075829  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:01.075904  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:01.113161  306360 cri.go:89] found id: ""
	I0407 14:16:01.113188  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.113196  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:01.113202  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:01.113264  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:01.145743  306360 cri.go:89] found id: ""
	I0407 14:16:01.145781  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.145793  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:01.145800  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:01.145891  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:01.180531  306360 cri.go:89] found id: ""
	I0407 14:16:01.180564  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.180576  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:01.180585  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:01.180651  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:01.219646  306360 cri.go:89] found id: ""
	I0407 14:16:01.219679  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.219691  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:01.219699  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:01.219765  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:01.262312  306360 cri.go:89] found id: ""
	I0407 14:16:01.262345  306360 logs.go:282] 0 containers: []
	W0407 14:16:01.262352  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:01.262363  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:01.262377  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:01.339749  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:01.339783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:01.382985  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:01.383022  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:01.434889  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:01.434921  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:01.451353  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:01.451378  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:01.532064  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.032625  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:04.045945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:04.046004  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:04.079093  306360 cri.go:89] found id: ""
	I0407 14:16:04.079123  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.079134  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:04.079143  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:04.079206  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:04.114148  306360 cri.go:89] found id: ""
	I0407 14:16:04.114181  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.114192  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:04.114200  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:04.114270  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:04.152718  306360 cri.go:89] found id: ""
	I0407 14:16:04.152747  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.152758  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:04.152766  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:04.152841  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:04.190031  306360 cri.go:89] found id: ""
	I0407 14:16:04.190065  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.190077  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:04.190085  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:04.190163  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:04.227623  306360 cri.go:89] found id: ""
	I0407 14:16:04.227660  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.227671  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:04.227679  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:04.227747  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:04.268005  306360 cri.go:89] found id: ""
	I0407 14:16:04.268035  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.268047  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:04.268055  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:04.268125  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:04.304340  306360 cri.go:89] found id: ""
	I0407 14:16:04.304364  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.304374  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:04.304381  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:04.304456  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:04.341425  306360 cri.go:89] found id: ""
	I0407 14:16:04.341490  306360 logs.go:282] 0 containers: []
	W0407 14:16:04.341502  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:04.341513  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:04.341526  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:04.398148  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:04.398179  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:04.414586  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:04.414612  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:04.482621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:04.482650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:04.482669  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:04.556315  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:04.556359  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.115968  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:07.129613  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:07.129672  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:07.167142  306360 cri.go:89] found id: ""
	I0407 14:16:07.167170  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.167180  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:07.167187  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:07.167246  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:07.198691  306360 cri.go:89] found id: ""
	I0407 14:16:07.198723  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.198730  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:07.198736  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:07.198790  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:07.231226  306360 cri.go:89] found id: ""
	I0407 14:16:07.231259  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.231268  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:07.231274  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:07.231326  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:07.263714  306360 cri.go:89] found id: ""
	I0407 14:16:07.263746  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.263757  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:07.263765  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:07.263828  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:07.301046  306360 cri.go:89] found id: ""
	I0407 14:16:07.301079  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.301090  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:07.301098  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:07.301189  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:07.333910  306360 cri.go:89] found id: ""
	I0407 14:16:07.333938  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.333948  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:07.333956  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:07.334023  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:07.366899  306360 cri.go:89] found id: ""
	I0407 14:16:07.366927  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.366937  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:07.366945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:07.367014  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:07.398845  306360 cri.go:89] found id: ""
	I0407 14:16:07.398878  306360 logs.go:282] 0 containers: []
	W0407 14:16:07.398887  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:07.398899  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:07.398912  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:07.411632  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:07.411663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:07.478836  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:07.478865  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:07.478883  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:07.557802  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:07.557852  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:07.602752  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:07.602785  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:10.155705  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:10.169146  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:10.169232  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:10.202657  306360 cri.go:89] found id: ""
	I0407 14:16:10.202694  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.202702  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:10.202708  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:10.202761  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:10.238239  306360 cri.go:89] found id: ""
	I0407 14:16:10.238272  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.238284  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:10.238292  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:10.238363  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:10.270804  306360 cri.go:89] found id: ""
	I0407 14:16:10.270833  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.270840  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:10.270847  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:10.270897  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:10.319453  306360 cri.go:89] found id: ""
	I0407 14:16:10.319491  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.319502  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:10.319510  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:10.319581  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:10.352622  306360 cri.go:89] found id: ""
	I0407 14:16:10.352654  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.352663  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:10.352670  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:10.352741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:10.385869  306360 cri.go:89] found id: ""
	I0407 14:16:10.385897  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.385906  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:10.385912  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:10.385979  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:10.420689  306360 cri.go:89] found id: ""
	I0407 14:16:10.420715  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.420724  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:10.420729  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:10.420786  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:10.454182  306360 cri.go:89] found id: ""
	I0407 14:16:10.454210  306360 logs.go:282] 0 containers: []
	W0407 14:16:10.454226  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:10.454238  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:10.454258  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:10.467987  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:10.468021  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:10.535621  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:10.535650  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:10.535663  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:10.613921  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:10.613963  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:10.663267  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:10.663299  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.220167  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:13.234197  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:13.234271  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:13.273116  306360 cri.go:89] found id: ""
	I0407 14:16:13.273159  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.273174  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:13.273180  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:13.273236  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:13.309984  306360 cri.go:89] found id: ""
	I0407 14:16:13.310024  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.310036  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:13.310044  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:13.310110  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:13.343107  306360 cri.go:89] found id: ""
	I0407 14:16:13.343145  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.343156  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:13.343162  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:13.343226  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:13.375826  306360 cri.go:89] found id: ""
	I0407 14:16:13.375857  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.375865  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:13.375871  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:13.375934  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:13.408895  306360 cri.go:89] found id: ""
	I0407 14:16:13.408930  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.408940  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:13.408945  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:13.409002  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:13.442272  306360 cri.go:89] found id: ""
	I0407 14:16:13.442309  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.442319  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:13.442329  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:13.442395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:13.478556  306360 cri.go:89] found id: ""
	I0407 14:16:13.478592  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.478600  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:13.478606  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:13.478671  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:13.512229  306360 cri.go:89] found id: ""
	I0407 14:16:13.512264  306360 logs.go:282] 0 containers: []
	W0407 14:16:13.512274  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:13.512287  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:13.512304  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:13.561858  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:13.561899  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:13.575518  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:13.575549  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:13.638490  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:13.638515  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:13.638528  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:13.714178  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:13.714219  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.252354  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:16.265849  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:16.265939  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:16.298742  306360 cri.go:89] found id: ""
	I0407 14:16:16.298774  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.298781  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:16.298788  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:16.298844  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:16.332441  306360 cri.go:89] found id: ""
	I0407 14:16:16.332476  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.332487  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:16.332496  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:16.332563  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:16.365820  306360 cri.go:89] found id: ""
	I0407 14:16:16.365857  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.365868  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:16.365880  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:16.365972  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:16.399094  306360 cri.go:89] found id: ""
	I0407 14:16:16.399125  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.399134  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:16.399140  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:16.399193  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:16.433322  306360 cri.go:89] found id: ""
	I0407 14:16:16.433356  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.433364  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:16.433372  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:16.433428  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:16.466435  306360 cri.go:89] found id: ""
	I0407 14:16:16.466466  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.466476  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:16.466484  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:16.466551  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:16.498858  306360 cri.go:89] found id: ""
	I0407 14:16:16.498887  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.498895  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:16.498900  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:16.498952  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:16.531126  306360 cri.go:89] found id: ""
	I0407 14:16:16.531166  306360 logs.go:282] 0 containers: []
	W0407 14:16:16.531177  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:16.531192  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:16.531206  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:16.610817  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:16.610857  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:16.650145  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:16.650180  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:16.699735  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:16.699821  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:16.719603  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:16.719634  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:16.813399  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.315126  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:19.327908  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:19.327993  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:19.361834  306360 cri.go:89] found id: ""
	I0407 14:16:19.361868  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.361877  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:19.361883  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:19.361947  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:19.396519  306360 cri.go:89] found id: ""
	I0407 14:16:19.396554  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.396565  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:19.396573  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:19.396645  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:19.431627  306360 cri.go:89] found id: ""
	I0407 14:16:19.431656  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.431665  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:19.431671  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:19.431741  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:19.465284  306360 cri.go:89] found id: ""
	I0407 14:16:19.465315  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.465323  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:19.465332  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:19.465393  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:19.497940  306360 cri.go:89] found id: ""
	I0407 14:16:19.497970  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.497984  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:19.497991  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:19.498060  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:19.533336  306360 cri.go:89] found id: ""
	I0407 14:16:19.533376  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.533389  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:19.533398  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:19.533469  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:19.568026  306360 cri.go:89] found id: ""
	I0407 14:16:19.568059  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.568076  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:19.568084  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:19.568153  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:19.601780  306360 cri.go:89] found id: ""
	I0407 14:16:19.601835  306360 logs.go:282] 0 containers: []
	W0407 14:16:19.601844  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:19.601854  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:19.601865  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:19.642543  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:19.642574  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:19.692073  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:19.692119  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:19.705748  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:19.705783  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:19.772531  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:19.772556  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:19.772577  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.351857  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:22.365447  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:22.365514  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:22.403999  306360 cri.go:89] found id: ""
	I0407 14:16:22.404028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.404036  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:22.404043  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:22.404094  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:22.441384  306360 cri.go:89] found id: ""
	I0407 14:16:22.441417  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.441426  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:22.441432  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:22.441487  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:22.490577  306360 cri.go:89] found id: ""
	I0407 14:16:22.490610  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.490621  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:22.490628  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:22.490714  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:22.537991  306360 cri.go:89] found id: ""
	I0407 14:16:22.538028  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.538040  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:22.538049  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:22.538120  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:22.584777  306360 cri.go:89] found id: ""
	I0407 14:16:22.584812  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.584824  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:22.584832  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:22.584920  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:22.627558  306360 cri.go:89] found id: ""
	I0407 14:16:22.627588  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.627596  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:22.627602  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:22.627665  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:22.664048  306360 cri.go:89] found id: ""
	I0407 14:16:22.664080  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.664089  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:22.664125  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:22.664180  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:22.697281  306360 cri.go:89] found id: ""
	I0407 14:16:22.697318  306360 logs.go:282] 0 containers: []
	W0407 14:16:22.697329  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:22.697345  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:22.697360  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:22.750380  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:22.750418  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:22.764135  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:22.764163  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:22.830720  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:22.830756  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:22.830775  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:22.910687  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:22.910728  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.452699  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:25.466127  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:16:25.466217  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:16:25.503288  306360 cri.go:89] found id: ""
	I0407 14:16:25.503320  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.503329  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:16:25.503335  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:16:25.503395  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:16:25.535855  306360 cri.go:89] found id: ""
	I0407 14:16:25.535891  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.535900  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:16:25.535907  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:16:25.535969  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:16:25.569103  306360 cri.go:89] found id: ""
	I0407 14:16:25.569135  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.569143  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:16:25.569149  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:16:25.569201  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:16:25.604482  306360 cri.go:89] found id: ""
	I0407 14:16:25.604521  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.604533  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:16:25.604542  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:16:25.604600  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:16:25.638915  306360 cri.go:89] found id: ""
	I0407 14:16:25.638948  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.638958  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:16:25.638966  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:16:25.639042  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:16:25.673087  306360 cri.go:89] found id: ""
	I0407 14:16:25.673122  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.673134  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:16:25.673141  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:16:25.673211  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:16:25.706454  306360 cri.go:89] found id: ""
	I0407 14:16:25.706490  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.706502  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:16:25.706511  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:16:25.706596  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:16:25.739824  306360 cri.go:89] found id: ""
	I0407 14:16:25.739861  306360 logs.go:282] 0 containers: []
	W0407 14:16:25.739872  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:16:25.739885  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:16:25.739900  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:16:25.818002  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:16:25.818045  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0407 14:16:25.866681  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:16:25.866715  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:16:25.920791  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:16:25.920824  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:16:25.934838  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:16:25.934870  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:16:26.005417  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:16:28.507450  306360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:16:28.526968  306360 kubeadm.go:597] duration metric: took 4m4.425341549s to restartPrimaryControlPlane
	W0407 14:16:28.527068  306360 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0407 14:16:28.527097  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:16:33.604963  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.077840903s)
	I0407 14:16:33.605045  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:16:33.619392  306360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:16:33.629694  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:16:33.639997  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:16:33.640021  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:16:33.640070  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:16:33.648891  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:16:33.648942  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:16:33.657964  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:16:33.666862  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:16:33.666907  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:16:33.675917  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.684806  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:16:33.684865  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:16:33.694385  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:16:33.703347  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:16:33.703399  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:16:33.712413  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:16:33.785507  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:16:33.785591  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:16:33.919661  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:16:33.919797  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:16:33.919913  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:16:34.088006  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:16:34.090058  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:16:34.090179  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:16:34.090273  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:16:34.090394  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:16:34.090467  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:16:34.090559  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:16:34.090629  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:16:34.090692  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:16:34.090745  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:16:34.090963  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:16:34.091371  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:16:34.091513  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:16:34.091573  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:16:34.250084  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:16:34.456551  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:16:34.600069  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:16:34.730872  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:16:34.745839  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:16:34.748203  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:16:34.748481  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:16:34.899583  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:16:34.901383  306360 out.go:235]   - Booting up control plane ...
	I0407 14:16:34.901512  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:16:34.910634  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:16:34.913019  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:16:34.913965  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:16:34.916441  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:17:14.918244  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:17:14.918361  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:14.918550  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:19.918793  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:19.919063  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:29.919626  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:29.919857  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:17:49.920620  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:17:49.920914  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.922713  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:18:29.922989  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:18:29.923024  306360 kubeadm.go:310] 
	I0407 14:18:29.923100  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:18:29.923192  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:18:29.923212  306360 kubeadm.go:310] 
	I0407 14:18:29.923266  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:18:29.923310  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:18:29.923461  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:18:29.923472  306360 kubeadm.go:310] 
	I0407 14:18:29.923695  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:18:29.923740  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:18:29.923826  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:18:29.923853  306360 kubeadm.go:310] 
	I0407 14:18:29.924004  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:18:29.924126  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:18:29.924136  306360 kubeadm.go:310] 
	I0407 14:18:29.924282  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:18:29.924392  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:18:29.924528  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:18:29.924627  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:18:29.924654  306360 kubeadm.go:310] 
	I0407 14:18:29.924807  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:18:29.924945  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:18:29.925037  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0407 14:18:29.925275  306360 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0407 14:18:29.925332  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0407 14:18:35.351481  306360 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.426121458s)
	I0407 14:18:35.351559  306360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:18:35.365827  306360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:18:35.376549  306360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:18:35.376577  306360 kubeadm.go:157] found existing configuration files:
	
	I0407 14:18:35.376637  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:18:35.386629  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:18:35.386696  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:18:35.397247  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:18:35.406945  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:18:35.407018  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:18:35.416924  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.426596  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:18:35.426665  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:18:35.436695  306360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:18:35.446316  306360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:18:35.446368  306360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:18:35.455990  306360 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 14:18:35.529786  306360 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0407 14:18:35.529882  306360 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:18:35.669860  306360 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:18:35.670044  306360 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:18:35.670206  306360 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0407 14:18:35.849445  306360 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:18:35.856509  306360 out.go:235]   - Generating certificates and keys ...
	I0407 14:18:35.856606  306360 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:18:35.856681  306360 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:18:35.856771  306360 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:18:35.856853  306360 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0407 14:18:35.856956  306360 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:18:35.857016  306360 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0407 14:18:35.857075  306360 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0407 14:18:35.857126  306360 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:18:35.857196  306360 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:18:35.857268  306360 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:18:35.857304  306360 kubeadm.go:310] [certs] Using the existing "sa" key
	I0407 14:18:35.857357  306360 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:18:35.974809  306360 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:18:36.175364  306360 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:18:36.293266  306360 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:18:36.465625  306360 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:18:36.480525  306360 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:18:36.481848  306360 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:18:36.481922  306360 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:18:36.613415  306360 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:18:36.615110  306360 out.go:235]   - Booting up control plane ...
	I0407 14:18:36.615269  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:18:36.628134  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:18:36.629532  306360 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:18:36.630589  306360 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:18:36.634513  306360 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0407 14:19:16.636775  306360 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0407 14:19:16.637057  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:16.637316  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:21.638264  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:21.638529  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:31.638701  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:31.638962  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:19:51.638889  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:19:51.639128  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638384  306360 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0407 14:20:31.638644  306360 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0407 14:20:31.638668  306360 kubeadm.go:310] 
	I0407 14:20:31.638702  306360 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0407 14:20:31.638742  306360 kubeadm.go:310] 		timed out waiting for the condition
	I0407 14:20:31.638748  306360 kubeadm.go:310] 
	I0407 14:20:31.638775  306360 kubeadm.go:310] 	This error is likely caused by:
	I0407 14:20:31.638810  306360 kubeadm.go:310] 		- The kubelet is not running
	I0407 14:20:31.638898  306360 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0407 14:20:31.638904  306360 kubeadm.go:310] 
	I0407 14:20:31.638985  306360 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0407 14:20:31.639023  306360 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0407 14:20:31.639065  306360 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0407 14:20:31.639072  306360 kubeadm.go:310] 
	I0407 14:20:31.639203  306360 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0407 14:20:31.639327  306360 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0407 14:20:31.639358  306360 kubeadm.go:310] 
	I0407 14:20:31.639513  306360 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0407 14:20:31.639633  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0407 14:20:31.639734  306360 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0407 14:20:31.639862  306360 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0407 14:20:31.639875  306360 kubeadm.go:310] 
	I0407 14:20:31.640981  306360 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:20:31.641122  306360 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0407 14:20:31.641237  306360 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0407 14:20:31.641301  306360 kubeadm.go:394] duration metric: took 8m7.609204589s to StartCluster
	I0407 14:20:31.641373  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0407 14:20:31.641452  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0407 14:20:31.685303  306360 cri.go:89] found id: ""
	I0407 14:20:31.685334  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.685345  306360 logs.go:284] No container was found matching "kube-apiserver"
	I0407 14:20:31.685353  306360 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0407 14:20:31.685419  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0407 14:20:31.719244  306360 cri.go:89] found id: ""
	I0407 14:20:31.719274  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.719285  306360 logs.go:284] No container was found matching "etcd"
	I0407 14:20:31.719293  306360 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0407 14:20:31.719367  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0407 14:20:31.753252  306360 cri.go:89] found id: ""
	I0407 14:20:31.753282  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.753292  306360 logs.go:284] No container was found matching "coredns"
	I0407 14:20:31.753299  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0407 14:20:31.753366  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0407 14:20:31.783957  306360 cri.go:89] found id: ""
	I0407 14:20:31.784001  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.784014  306360 logs.go:284] No container was found matching "kube-scheduler"
	I0407 14:20:31.784024  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0407 14:20:31.784113  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0407 14:20:31.819615  306360 cri.go:89] found id: ""
	I0407 14:20:31.819652  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.819660  306360 logs.go:284] No container was found matching "kube-proxy"
	I0407 14:20:31.819666  306360 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0407 14:20:31.819730  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0407 14:20:31.855903  306360 cri.go:89] found id: ""
	I0407 14:20:31.855942  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.855954  306360 logs.go:284] No container was found matching "kube-controller-manager"
	I0407 14:20:31.855962  306360 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0407 14:20:31.856028  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0407 14:20:31.890988  306360 cri.go:89] found id: ""
	I0407 14:20:31.891018  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.891027  306360 logs.go:284] No container was found matching "kindnet"
	I0407 14:20:31.891033  306360 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0407 14:20:31.891086  306360 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0407 14:20:31.924794  306360 cri.go:89] found id: ""
	I0407 14:20:31.924827  306360 logs.go:282] 0 containers: []
	W0407 14:20:31.924837  306360 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0407 14:20:31.924861  306360 logs.go:123] Gathering logs for kubelet ...
	I0407 14:20:31.924876  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0407 14:20:31.972904  306360 logs.go:123] Gathering logs for dmesg ...
	I0407 14:20:31.972948  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0407 14:20:31.988056  306360 logs.go:123] Gathering logs for describe nodes ...
	I0407 14:20:31.988090  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0407 14:20:32.061617  306360 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0407 14:20:32.061657  306360 logs.go:123] Gathering logs for CRI-O ...
	I0407 14:20:32.061672  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0407 14:20:32.165554  306360 logs.go:123] Gathering logs for container status ...
	I0407 14:20:32.165600  306360 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0407 14:20:32.208010  306360 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0407 14:20:32.208080  306360 out.go:270] * 
	W0407 14:20:32.208169  306360 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.208186  306360 out.go:270] * 
	W0407 14:20:32.209134  306360 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:20:32.213132  306360 out.go:201] 
	W0407 14:20:32.214433  306360 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0407 14:20:32.214485  306360 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0407 14:20:32.214528  306360 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0407 14:20:32.216101  306360 out.go:201] 
	
	
	==> CRI-O <==
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.323873624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036459323840422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6f4b6df-2c0b-4a8e-adac-5579042f33ef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.324436889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a308d1d1-799c-4c90-ac3c-dd91a2f8b3f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.324509988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a308d1d1-799c-4c90-ac3c-dd91a2f8b3f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.324551725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a308d1d1-799c-4c90-ac3c-dd91a2f8b3f2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.356065853Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da6f3610-58ad-4171-a7a7-fe246ceca4c1 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.356190985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da6f3610-58ad-4171-a7a7-fe246ceca4c1 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.357353493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f7c5705-9a8d-4054-accb-a8a69235e209 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.357726088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036459357707574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f7c5705-9a8d-4054-accb-a8a69235e209 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.358390294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c35a884f-0d3a-4a1d-93dd-0238771b1279 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.358446187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c35a884f-0d3a-4a1d-93dd-0238771b1279 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.358478743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c35a884f-0d3a-4a1d-93dd-0238771b1279 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.391052998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=019482bb-c0b6-4571-bdbc-b890bb3d5f22 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.391149487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=019482bb-c0b6-4571-bdbc-b890bb3d5f22 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.392774072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb639e5e-f8f2-4447-bf5c-d12ed4964a66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.393238940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036459393217861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb639e5e-f8f2-4447-bf5c-d12ed4964a66 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.394207250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1e66ba4-acdf-4cfb-97be-3731b0f1247c name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.394259904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1e66ba4-acdf-4cfb-97be-3731b0f1247c name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.394338958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1e66ba4-acdf-4cfb-97be-3731b0f1247c name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.430414011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4691eb9f-ecdd-4c01-85e6-07bf6fa6d0d1 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.430488730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4691eb9f-ecdd-4c01-85e6-07bf6fa6d0d1 name=/runtime.v1.RuntimeService/Version
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.431874902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b44d914-64ab-45f3-b79a-ed228cd0a386 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.432316170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744036459432290861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b44d914-64ab-45f3-b79a-ed228cd0a386 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.433172744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba7c43ee-3436-4ac7-ac63-7dbede9e4642 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.433231658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba7c43ee-3436-4ac7-ac63-7dbede9e4642 name=/runtime.v1.RuntimeService/ListContainers
	Apr 07 14:34:19 old-k8s-version-405646 crio[630]: time="2025-04-07 14:34:19.433270802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ba7c43ee-3436-4ac7-ac63-7dbede9e4642 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 7 14:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053325] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041250] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr 7 14:12] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.811633] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.228522] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.053600] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065282] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177286] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.157141] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.250668] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +7.115917] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.069863] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.742427] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +13.578914] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 7 14:16] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[Apr 7 14:18] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.067537] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:34:19 up 22 min,  0 users,  load average: 0.00, 0.06, 0.03
	Linux old-k8s-version-405646 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b7e760, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0008d0d50, 0x24, 0x60, 0x7f30fce26de8, 0x118, ...)
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: net/http.(*Transport).dial(0xc0008b8000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0008d0d50, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: net/http.(*Transport).dialConn(0xc0008b8000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000464480, 0x5, 0xc0008d0d50, 0x24, 0x0, 0xc000b797a0, ...)
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: net/http.(*Transport).dialConnFor(0xc0008b8000, 0xc000be0420)
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: created by net/http.(*Transport).queueForDial
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: goroutine 174 [select]:
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c7f8c0, 0xc000bbc780, 0xc000ce8780, 0xc000ce8720)
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]: created by net.(*netFD).connect
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7094]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 07 14:34:14 old-k8s-version-405646 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 07 14:34:14 old-k8s-version-405646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 07 14:34:14 old-k8s-version-405646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Apr 07 14:34:14 old-k8s-version-405646 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 14:34:14 old-k8s-version-405646 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7104]: I0407 14:34:14.798442    7104 server.go:416] Version: v1.20.0
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7104]: I0407 14:34:14.799183    7104 server.go:837] Client rotation is on, will bootstrap in background
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7104]: I0407 14:34:14.802080    7104 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7104]: I0407 14:34:14.804028    7104 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 07 14:34:14 old-k8s-version-405646 kubelet[7104]: W0407 14:34:14.804226    7104 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 2 (229.003246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405646" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (284.55s)

                                                
                                    

Test pass (271/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.28
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 4.75
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.14
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 85.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.57
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.51
35 TestAddons/parallel/Registry 16.73
37 TestAddons/parallel/InspektorGadget 12.01
38 TestAddons/parallel/MetricsServer 5.76
40 TestAddons/parallel/CSI 61.84
41 TestAddons/parallel/Headlamp 23.47
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 16.16
44 TestAddons/parallel/NvidiaDevicePlugin 6.85
45 TestAddons/parallel/Yakd 12.02
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 91.31
49 TestCertExpiration 312.94
51 TestForceSystemdFlag 73.94
52 TestForceSystemdEnv 46.66
54 TestKVMDriverInstallOrUpdate 4.49
58 TestErrorSpam/setup 43.64
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.65
62 TestErrorSpam/unpause 1.68
63 TestErrorSpam/stop 5.88
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.81
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 46.08
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 34.16
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 3.75
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 10.86
91 TestFunctional/parallel/DryRun 0.27
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 0.77
97 TestFunctional/parallel/ServiceCmdConnect 22.53
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 45.13
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.28
103 TestFunctional/parallel/MySQL 22.59
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.31
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.75
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
129 TestFunctional/parallel/ImageCommands/ImageBuild 6.96
130 TestFunctional/parallel/ImageCommands/Setup 1.53
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
136 TestFunctional/parallel/ProfileCmd/profile_list 0.33
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.81
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.94
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.93
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.11
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.69
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.16
145 TestFunctional/parallel/MountCmd/any-port 9.66
146 TestFunctional/parallel/ServiceCmd/List 0.44
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
149 TestFunctional/parallel/ServiceCmd/Format 0.3
150 TestFunctional/parallel/ServiceCmd/URL 0.38
151 TestFunctional/parallel/MountCmd/specific-port 1.97
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.3
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 193.73
161 TestMultiControlPlane/serial/DeployApp 8.85
162 TestMultiControlPlane/serial/PingHostFromPods 1.24
163 TestMultiControlPlane/serial/AddWorkerNode 59.57
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
166 TestMultiControlPlane/serial/CopyFile 13.03
167 TestMultiControlPlane/serial/StopSecondaryNode 91.66
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 52.78
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 436.91
172 TestMultiControlPlane/serial/DeleteSecondaryNode 19.17
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
174 TestMultiControlPlane/serial/StopCluster 272.76
175 TestMultiControlPlane/serial/RestartCluster 128.94
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
177 TestMultiControlPlane/serial/AddSecondaryNode 77.65
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 87.5
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.75
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.35
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 91.53
214 TestMountStart/serial/StartWithMountFirst 31.42
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 30.49
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.9
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 1.28
221 TestMountStart/serial/RestartStopped 21.73
222 TestMountStart/serial/VerifyMountPostStop 0.39
225 TestMultiNode/serial/FreshStart2Nodes 116.88
226 TestMultiNode/serial/DeployApp2Nodes 6.56
227 TestMultiNode/serial/PingHostFrom2Pods 0.8
228 TestMultiNode/serial/AddNode 50.02
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.59
231 TestMultiNode/serial/CopyFile 7.31
232 TestMultiNode/serial/StopNode 2.33
233 TestMultiNode/serial/StartAfterStop 39.68
234 TestMultiNode/serial/RestartKeepsNodes 343.48
235 TestMultiNode/serial/DeleteNode 2.6
236 TestMultiNode/serial/StopMultiNode 181.67
237 TestMultiNode/serial/RestartMultiNode 154.68
238 TestMultiNode/serial/ValidateNameConflict 44.5
245 TestScheduledStopUnix 115.76
249 TestRunningBinaryUpgrade 228.33
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestNoKubernetes/serial/StartWithK8s 101.12
263 TestNetworkPlugins/group/false 3.12
267 TestStoppedBinaryUpgrade/Setup 0.29
268 TestStoppedBinaryUpgrade/Upgrade 151.18
269 TestNoKubernetes/serial/StartWithStopK8s 65.18
270 TestNoKubernetes/serial/Start 28.93
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
272 TestNoKubernetes/serial/ProfileList 30.36
273 TestNoKubernetes/serial/Stop 1.36
274 TestNoKubernetes/serial/StartNoArgs 23.42
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
285 TestPause/serial/Start 89.84
286 TestNetworkPlugins/group/auto/Start 90.24
288 TestNetworkPlugins/group/auto/KubeletFlags 0.27
289 TestNetworkPlugins/group/auto/NetCatPod 13.31
290 TestNetworkPlugins/group/flannel/Start 72.4
291 TestNetworkPlugins/group/auto/DNS 0.16
292 TestNetworkPlugins/group/auto/Localhost 0.13
293 TestNetworkPlugins/group/auto/HairPin 0.13
294 TestNetworkPlugins/group/enable-default-cni/Start 59.94
295 TestNetworkPlugins/group/bridge/Start 82.04
296 TestNetworkPlugins/group/flannel/ControllerPod 6.01
297 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
298 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
299 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
300 TestNetworkPlugins/group/flannel/NetCatPod 12.7
301 TestNetworkPlugins/group/enable-default-cni/DNS 16.34
302 TestNetworkPlugins/group/flannel/DNS 0.17
303 TestNetworkPlugins/group/flannel/Localhost 0.15
304 TestNetworkPlugins/group/flannel/HairPin 0.14
305 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
306 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
307 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
308 TestNetworkPlugins/group/bridge/NetCatPod 11.29
309 TestNetworkPlugins/group/calico/Start 82.09
310 TestNetworkPlugins/group/kindnet/Start 87.26
311 TestNetworkPlugins/group/bridge/DNS 0.17
312 TestNetworkPlugins/group/bridge/Localhost 0.15
313 TestNetworkPlugins/group/bridge/HairPin 0.15
314 TestNetworkPlugins/group/custom-flannel/Start 113.09
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.23
319 TestNetworkPlugins/group/calico/NetCatPod 12.28
320 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
322 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
323 TestNetworkPlugins/group/calico/DNS 0.17
324 TestNetworkPlugins/group/calico/Localhost 0.13
325 TestNetworkPlugins/group/calico/HairPin 0.13
326 TestNetworkPlugins/group/kindnet/DNS 0.22
327 TestNetworkPlugins/group/kindnet/Localhost 0.18
328 TestNetworkPlugins/group/kindnet/HairPin 0.15
330 TestStartStop/group/no-preload/serial/FirstStart 72.11
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
333 TestStartStop/group/embed-certs/serial/FirstStart 83.86
334 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
335 TestNetworkPlugins/group/custom-flannel/DNS 0.14
336 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
337 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 114.86
340 TestStartStop/group/no-preload/serial/DeployApp 10.31
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
342 TestStartStop/group/no-preload/serial/Stop 91
343 TestStartStop/group/embed-certs/serial/DeployApp 11.37
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
345 TestStartStop/group/embed-certs/serial/Stop 91.02
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.44
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/no-preload/serial/SecondStart 349.59
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
352 TestStartStop/group/embed-certs/serial/SecondStart 300.55
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 301.87
357 TestStartStop/group/old-k8s-version/serial/Stop 3.3
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
363 TestStartStop/group/embed-certs/serial/Pause 2.91
365 TestStartStop/group/newest-cni/serial/FirstStart 47.59
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
369 TestStartStop/group/no-preload/serial/Pause 3.15
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
374 TestStartStop/group/newest-cni/serial/Stop 11.29
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.5
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
378 TestStartStop/group/newest-cni/serial/SecondStart 34.99
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/newest-cni/serial/Pause 2.39
TestDownloadOnly/v1.20.0/json-events (7.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-378763 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-378763 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.279653713s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.28s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:55:23.928489  249516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0407 12:55:23.928594  249516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-378763
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-378763: exit status 85 (59.786248ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-378763 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |          |
	|         | -p download-only-378763        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:55:16
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:55:16.691356  249528 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:55:16.691623  249528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:16.691633  249528 out.go:358] Setting ErrFile to fd 2...
	I0407 12:55:16.691637  249528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:16.691849  249528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	W0407 12:55:16.691967  249528 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20598-242355/.minikube/config/config.json: open /home/jenkins/minikube-integration/20598-242355/.minikube/config/config.json: no such file or directory
	I0407 12:55:16.692593  249528 out.go:352] Setting JSON to true
	I0407 12:55:16.693468  249528 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":16664,"bootTime":1744013853,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:55:16.693577  249528 start.go:139] virtualization: kvm guest
	I0407 12:55:16.695950  249528 out.go:97] [download-only-378763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:55:16.696059  249528 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:55:16.696096  249528 notify.go:220] Checking for updates...
	I0407 12:55:16.697272  249528 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:55:16.698638  249528 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:55:16.699924  249528 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 12:55:16.701314  249528 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 12:55:16.702607  249528 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:55:16.704986  249528 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:55:16.705219  249528 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:55:16.742911  249528 out.go:97] Using the kvm2 driver based on user configuration
	I0407 12:55:16.742951  249528 start.go:297] selected driver: kvm2
	I0407 12:55:16.742957  249528 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:55:16.743318  249528 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:55:16.743411  249528 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20598-242355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:55:16.759205  249528 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:55:16.759257  249528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:55:16.759787  249528 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:55:16.759958  249528 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:55:16.760000  249528 cni.go:84] Creating CNI manager for ""
	I0407 12:55:16.760059  249528 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0407 12:55:16.760073  249528 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:55:16.760138  249528 start.go:340] cluster config:
	{Name:download-only-378763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-378763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:55:16.760322  249528 iso.go:125] acquiring lock: {Name:mk6d72e1b2a59d3c4dd958601dac3ffc7df02d9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:55:16.762039  249528 out.go:97] Downloading VM boot image ...
	I0407 12:55:16.762075  249528 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20598-242355/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 12:55:19.339962  249528 out.go:97] Starting "download-only-378763" primary control-plane node in "download-only-378763" cluster
	I0407 12:55:19.339998  249528 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:55:19.368191  249528 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0407 12:55:19.368225  249528 cache.go:56] Caching tarball of preloaded images
	I0407 12:55:19.368397  249528 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0407 12:55:19.370276  249528 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:55:19.370297  249528 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0407 12:55:19.399637  249528 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-378763 host does not exist
	  To start a cluster, run: "minikube start -p download-only-378763"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-378763
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-084066 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-084066 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.7538915s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.75s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:55:29.003399  249516 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0407 12:55:29.003445  249516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-242355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-084066
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-084066: exit status 85 (59.486953ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-378763 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | -p download-only-378763        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| delete  | -p download-only-378763        | download-only-378763 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC | 07 Apr 25 12:55 UTC |
	| start   | -o=json --download-only        | download-only-084066 | jenkins | v1.35.0 | 07 Apr 25 12:55 UTC |                     |
	|         | -p download-only-084066        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:55:24
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:55:24.289864  249728 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:55:24.289963  249728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:24.289970  249728 out.go:358] Setting ErrFile to fd 2...
	I0407 12:55:24.289976  249728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:55:24.290192  249728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 12:55:24.290769  249728 out.go:352] Setting JSON to true
	I0407 12:55:24.291700  249728 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":16671,"bootTime":1744013853,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:55:24.291815  249728 start.go:139] virtualization: kvm guest
	I0407 12:55:24.293573  249728 out.go:97] [download-only-084066] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:55:24.293722  249728 notify.go:220] Checking for updates...
	I0407 12:55:24.294758  249728 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:55:24.295775  249728 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:55:24.297077  249728 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 12:55:24.298221  249728 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 12:55:24.299517  249728 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-084066 host does not exist
	  To start a cluster, run: "minikube start -p download-only-084066"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-084066
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I0407 12:55:29.577704  249516 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-206431 --alsologtostderr --binary-mirror http://127.0.0.1:37115 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-206431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-206431
--- PASS: TestBinaryMirror (0.61s)
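
The --binary-mirror flag exercised above points Kubernetes binary downloads at the given URL instead of the default dl.k8s.io host. A minimal reproduction sketch, not taken from this run (the port, profile name and stand-in mirror are illustrative; a real mirror must serve the same release paths minikube requests):

	python3 -m http.server 8080 &    # stand-in HTTP endpoint acting as the mirror
	out/minikube-linux-amd64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:8080 --driver=kvm2 --container-runtime=crio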

                                                
                                    
TestOffline (85.29s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-793502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-793502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.105600048s)
helpers_test.go:175: Cleaning up "offline-crio-793502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-793502
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-793502: (1.181659751s)
--- PASS: TestOffline (85.29s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-735249
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-735249: exit status 85 (51.692441ms)

                                                
                                                
-- stdout --
	* Profile "addons-735249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-735249"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-735249
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-735249: exit status 85 (51.168165ms)

                                                
                                                
-- stdout --
	* Profile "addons-735249" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-735249"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (136.57s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-735249 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-735249 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.571696281s)
--- PASS: TestAddons/Setup (136.57s)
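
The long --addons list above is only one way to get there; the same profile also accepts incremental enable/disable calls of the shape used by the later subtests. A small sketch reusing commands that appear elsewhere in this run:

	out/minikube-linux-amd64 -p addons-735249 addons enable headlamp --alsologtostderr -v=1
	out/minikube-linux-amd64 -p addons-735249 addons disable headlamp --alsologtostderr -v=1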

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-735249 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-735249 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-735249 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-735249 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d8ef8c44-629b-404a-b551-3d7bcccc3d86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d8ef8c44-629b-404a-b551-3d7bcccc3d86] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.008180482s
addons_test.go:633: (dbg) Run:  kubectl --context addons-735249 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-735249 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-735249 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)
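
The credential-injection check above can be repeated by hand against any pod in the cluster; a short sketch built from the same commands this test runs (the pod and service-account names are the ones created here):

	kubectl --context addons-735249 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"
	kubectl --context addons-735249 describe sa gcp-auth-test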

                                                
                                    
TestAddons/parallel/Registry (16.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 10.079712ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-vz6qp" [3cb295f5-143a-4936-a222-d574355c2a0d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003282393s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bcl68" [16a879e9-1f5d-4e62-846d-b9dbbbf00755] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004856778s
addons_test.go:331: (dbg) Run:  kubectl --context addons-735249 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-735249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-735249 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.876282843s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.73s)
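
The reachability probe in this test amounts to resolving the in-cluster registry service and issuing a spider request against it; a minimal sketch using the same image and URL (the pod name is illustrative):

	kubectl --context addons-735249 run --rm registry-check --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"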

                                                
                                    
TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mf7tv" [88d4433e-cc19-4876-8e21-d97095d847b3] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.010812003s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable inspektor-gadget --alsologtostderr -v=1: (5.993296477s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 10.457134ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-w467q" [cfd66a0f-4642-418e-8488-660d68bd0187] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0047811s
addons_test.go:402: (dbg) Run:  kubectl --context addons-735249 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
TestAddons/parallel/CSI (61.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0407 12:58:13.975050  249516 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:58:13.985295  249516 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:58:13.985326  249516 kapi.go:107] duration metric: took 10.303844ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.317165ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/04/07 12:58:23 [DEBUG] GET http://192.168.39.136:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [baef6677-b713-43de-a562-4cf1a5966e72] Pending
helpers_test.go:344: "task-pv-pod" [baef6677-b713-43de-a562-4cf1a5966e72] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [baef6677-b713-43de-a562-4cf1a5966e72] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003593686s
addons_test.go:511: (dbg) Run:  kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-735249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-735249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-735249 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-735249 delete pod task-pv-pod: (1.124853216s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-735249 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [45718bf9-906a-48d1-b30d-97942120a1db] Pending
helpers_test.go:344: "task-pv-pod-restore" [45718bf9-906a-48d1-b30d-97942120a1db] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003544327s
addons_test.go:553: (dbg) Run:  kubectl --context addons-735249 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-735249 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-735249 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.807641594s)
--- PASS: TestAddons/parallel/CSI (61.84s)
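
The flow above condenses to: create a claim, wait for it to bind, snapshot it, then restore into a new claim and pod. The key commands, taken from the steps this test polls with:

	kubectl --context addons-735249 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-735249 get pvc hpvc -o jsonpath={.status.phase} -n default
	kubectl --context addons-735249 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default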

                                                
                                    
TestAddons/parallel/Headlamp (23.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-735249 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-t6d67" [a69beadf-9092-442e-8cfc-e8acb823e008] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-t6d67" [a69beadf-9092-442e-8cfc-e8acb823e008] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003258022s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable headlamp --alsologtostderr -v=1: (6.478016084s)
--- PASS: TestAddons/parallel/Headlamp (23.47s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-dsl6d" [d1cc4bae-e847-413e-be84-e050a6eba4c1] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002997921s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (16.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-735249 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-735249 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c17cb5cf-9321-4d05-a37b-6ec45b652c7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c17cb5cf-9321-4d05-a37b-6ec45b652c7f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c17cb5cf-9321-4d05-a37b-6ec45b652c7f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.00372895s
addons_test.go:906: (dbg) Run:  kubectl --context addons-735249 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 ssh "cat /opt/local-path-provisioner/pvc-907ff389-84bc-49da-96de-e62e4981b23c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-735249 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-735249 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.16s)
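
The rancher local-path flow above writes its data under /opt/local-path-provisioner inside the node, in a directory named after the generated volume; a brief sketch of the same verification (the <pvc-id> placeholder stands for the generated volume name shown in the ssh step above):

	kubectl --context addons-735249 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-735249 get pvc test-pvc -o jsonpath={.status.phase} -n default
	out/minikube-linux-amd64 -p addons-735249 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"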

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7zt67" [dba13539-6120-49a5-8bee-1dccb5579bec] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003126769s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                    
TestAddons/parallel/Yakd (12.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-qjbdn" [201329ba-2eec-4ab5-bc95-77793ab153b8] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003501098s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-735249 addons disable yakd --alsologtostderr -v=1: (6.019013631s)
--- PASS: TestAddons/parallel/Yakd (12.02s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-735249
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-735249: (1m30.972593805s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-735249
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-735249
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-735249
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

                                                
                                    
TestCertOptions (91.31s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-574980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-574980 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m30.056480108s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-574980 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-574980 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-574980 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-574980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-574980
--- PASS: TestCertOptions (91.31s)
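
The certificate checks above can be rerun against the profile directly; a short sketch grounded in the same commands, with a grep added to surface the relevant fields:

	out/minikube-linux-amd64 -p cert-options-574980 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options-574980 config view --minify | grep server   # expected to show the custom 8555 API port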

                                                
                                    
TestCertExpiration (312.94s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-837665 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-837665 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (50.723748227s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-837665 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0407 14:02:47.433796  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-837665 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m21.03651754s)
helpers_test.go:175: Cleaning up "cert-expiration-837665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-837665
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-837665: (1.175952967s)
--- PASS: TestCertExpiration (312.94s)
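
The behaviour under test is driven entirely by --cert-expiration: certificates are first issued with a 3m lifetime, and a later start with a longer value is expected to succeed once the originals have expired. A minimal sketch with an illustrative profile name:

	out/minikube-linux-amd64 start -p cert-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# wait out the 3 minutes, then restart with a one-year expiry
	out/minikube-linux-amd64 start -p cert-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio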

                                                
                                    
TestForceSystemdFlag (73.94s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-939490 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-939490 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.700307678s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-939490 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-939490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-939490
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-939490: (1.031451859s)
--- PASS: TestForceSystemdFlag (73.94s)
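
The final assertion in this test reads the generated CRI-O drop-in; the same check can be made by hand, and with --force-systemd set the file is expected to select the systemd cgroup manager:

	out/minikube-linux-amd64 -p force-systemd-flag-939490 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager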

                                                
                                    
TestForceSystemdEnv (46.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-840043 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-840043 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.865401716s)
helpers_test.go:175: Cleaning up "force-systemd-env-840043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-840043
--- PASS: TestForceSystemdEnv (46.66s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.49s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0407 13:54:55.909731  249516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:54:55.909914  249516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0407 13:54:55.939876  249516 install.go:62] docker-machine-driver-kvm2: exit status 1
W0407 13:54:55.940072  249516 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:54:55.940147  249516 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate973895946/001/docker-machine-driver-kvm2
I0407 13:54:56.195414  249516 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate973895946/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00068f0d8 gz:0xc00068f180 tar:0xc00068f110 tar.bz2:0xc00068f130 tar.gz:0xc00068f140 tar.xz:0xc00068f160 tar.zst:0xc00068f170 tbz2:0xc00068f130 tgz:0xc00068f140 txz:0xc00068f160 tzst:0xc00068f170 xz:0xc00068f188 zip:0xc00068f190 zst:0xc00068f1a0] Getters:map[file:0xc00084fdd0 http:0xc000594fa0 https:0xc000594ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:54:56.195481  249516 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate973895946/001/docker-machine-driver-kvm2
I0407 13:54:58.628095  249516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:54:58.628241  249516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0407 13:54:58.663159  249516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0407 13:54:58.663193  249516 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0407 13:54:58.663256  249516 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:54:58.663292  249516 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate973895946/002/docker-machine-driver-kvm2
I0407 13:54:58.715411  249516 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate973895946/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00068f0d8 gz:0xc00068f180 tar:0xc00068f110 tar.bz2:0xc00068f130 tar.gz:0xc00068f140 tar.xz:0xc00068f160 tar.zst:0xc00068f170 tbz2:0xc00068f130 tgz:0xc00068f140 txz:0xc00068f160 tzst:0xc00068f170 xz:0xc00068f188 zip:0xc00068f190 zst:0xc00068f1a0] Getters:map[file:0xc000ae46f0 http:0xc001e402d0 https:0xc001e40320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:54:58.715458  249516 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate973895946/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.49s)
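
The install/update logic exercised above compares the version reported by whichever docker-machine-driver-kvm2 is first on PATH with the wanted release, and re-downloads the driver when the binary is missing or older. A rough sketch of the same check, assuming the driver binary accepts a version subcommand the way minikube's validator invokes it:

	which docker-machine-driver-kvm2
	docker-machine-driver-kvm2 version   # minikube triggers a fresh download when this is absent or older than the target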

                                                
                                    
TestErrorSpam/setup (43.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-562304 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-562304 --driver=kvm2  --container-runtime=crio
E0407 13:02:47.441082  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.447427  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.458797  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.480134  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.521493  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.602965  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:47.764492  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:48.086187  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:48.728286  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:50.009947  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:52.572861  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:57.694394  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:03:07.936184  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-562304 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-562304 --driver=kvm2  --container-runtime=crio: (43.639256558s)
--- PASS: TestErrorSpam/setup (43.64s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (5.88s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop: (2.302169673s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop: (1.548977746s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop
E0407 13:03:28.418433  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-562304 --log_dir /tmp/nospam-562304 stop: (2.029370318s)
--- PASS: TestErrorSpam/stop (5.88s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20598-242355/.minikube/files/etc/test/nested/copy/249516/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0407 13:04:09.381654  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-709179 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.810158507s)
--- PASS: TestFunctional/serial/StartWithProxy (55.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (46.08s)

=== RUN   TestFunctional/serial/SoftStart
I0407 13:04:25.319889  249516 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-709179 --alsologtostderr -v=8: (46.079297775s)
functional_test.go:680: soft start took 46.079994363s for "functional-709179" cluster.
I0407 13:05:11.399502  249516 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (46.08s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-709179 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:3.1: (1.019079646s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:3.3: (1.10751036s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 cache add registry.k8s.io/pause:latest: (1.08611201s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-709179 /tmp/TestFunctionalserialCacheCmdcacheadd_local987403835/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache add minikube-local-cache-test:functional-709179
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 cache add minikube-local-cache-test:functional-709179: (1.636183239s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache delete minikube-local-cache-test:functional-709179
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-709179
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.151401ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 cache reload: (1.008805897s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 kubectl -- --context functional-709179 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-709179 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (34.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 13:05:31.304701  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-709179 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.161571688s)
functional_test.go:778: restart took 34.161693418s for "functional-709179" cluster.
I0407 13:05:53.172497  249516 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (34.16s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-709179 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 logs: (1.462781913s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 logs --file /tmp/TestFunctionalserialLogsFileCmd3016469097/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 logs --file /tmp/TestFunctionalserialLogsFileCmd3016469097/001/logs.txt: (1.463458948s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (3.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-709179 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-709179
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-709179: exit status 115 (272.73673ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.37:31299 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-709179 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.75s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 config get cpus: exit status 14 (52.170605ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 config get cpus: exit status 14 (50.202471ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (10.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-709179 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-709179 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 257779: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.86s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-709179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.938269ms)

-- stdout --
	* [functional-709179] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0407 13:06:26.260609  257513 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:06:26.260701  257513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:26.260711  257513 out.go:358] Setting ErrFile to fd 2...
	I0407 13:06:26.260717  257513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:26.260909  257513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:06:26.261481  257513 out.go:352] Setting JSON to false
	I0407 13:06:26.262390  257513 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":17333,"bootTime":1744013853,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:06:26.262447  257513 start.go:139] virtualization: kvm guest
	I0407 13:06:26.264405  257513 out.go:177] * [functional-709179] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:06:26.265738  257513 notify.go:220] Checking for updates...
	I0407 13:06:26.265805  257513 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:06:26.267079  257513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:06:26.268464  257513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:06:26.270261  257513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:06:26.271553  257513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:06:26.272877  257513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:06:26.274400  257513 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:06:26.274812  257513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:06:26.274868  257513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:06:26.290135  257513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0407 13:06:26.290730  257513 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:06:26.291304  257513 main.go:141] libmachine: Using API Version  1
	I0407 13:06:26.291328  257513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:06:26.291747  257513 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:06:26.292005  257513 main.go:141] libmachine: (functional-709179) Calling .DriverName
	I0407 13:06:26.292321  257513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:06:26.292833  257513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:06:26.292889  257513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:06:26.307799  257513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0407 13:06:26.308301  257513 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:06:26.308780  257513 main.go:141] libmachine: Using API Version  1
	I0407 13:06:26.308806  257513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:06:26.309162  257513 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:06:26.309362  257513 main.go:141] libmachine: (functional-709179) Calling .DriverName
	I0407 13:06:26.342831  257513 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 13:06:26.344152  257513 start.go:297] selected driver: kvm2
	I0407 13:06:26.344168  257513 start.go:901] validating driver "kvm2" against &{Name:functional-709179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-709179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:06:26.344294  257513 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:06:26.346675  257513 out.go:201] 
	W0407 13:06:26.347820  257513 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 13:06:26.348848  257513 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-709179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (208.663736ms)

-- stdout --
	* [functional-709179] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0407 13:06:23.016786  257162 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:06:23.016896  257162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:23.016905  257162 out.go:358] Setting ErrFile to fd 2...
	I0407 13:06:23.016909  257162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:06:23.017163  257162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:06:23.017743  257162 out.go:352] Setting JSON to false
	I0407 13:06:23.018644  257162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":17330,"bootTime":1744013853,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:06:23.018746  257162 start.go:139] virtualization: kvm guest
	I0407 13:06:23.084024  257162 out.go:177] * [functional-709179] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0407 13:06:23.088526  257162 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:06:23.088533  257162 notify.go:220] Checking for updates...
	I0407 13:06:23.092726  257162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:06:23.093907  257162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:06:23.095200  257162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:06:23.096550  257162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:06:23.098173  257162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:06:23.100072  257162 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:06:23.100581  257162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:06:23.100661  257162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:06:23.116495  257162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I0407 13:06:23.116990  257162 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:06:23.117624  257162 main.go:141] libmachine: Using API Version  1
	I0407 13:06:23.117652  257162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:06:23.118048  257162 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:06:23.118281  257162 main.go:141] libmachine: (functional-709179) Calling .DriverName
	I0407 13:06:23.118566  257162 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:06:23.118952  257162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:06:23.118997  257162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:06:23.133726  257162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0407 13:06:23.134268  257162 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:06:23.134807  257162 main.go:141] libmachine: Using API Version  1
	I0407 13:06:23.134823  257162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:06:23.135157  257162 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:06:23.135350  257162 main.go:141] libmachine: (functional-709179) Calling .DriverName
	I0407 13:06:23.168595  257162 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0407 13:06:23.169844  257162 start.go:297] selected driver: kvm2
	I0407 13:06:23.169860  257162 start.go:901] validating driver "kvm2" against &{Name:functional-709179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-709179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:06:23.170025  257162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:06:23.172482  257162 out.go:201] 
	W0407 13:06:23.173773  257162 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 13:06:23.174941  257162 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (0.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

TestFunctional/parallel/ServiceCmdConnect (22.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-709179 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-709179 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2cm7j" [46f4ee1e-2c78-4300-bd58-79781228d7f5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2cm7j" [46f4ee1e-2c78-4300-bd58-79781228d7f5] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.004172885s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.37:31460
functional_test.go:1692: http://192.168.39.37:31460: success! body:

Hostname: hello-node-connect-58f9cf68d8-2cm7j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.37:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.37:31460
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.53s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (45.13s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [75cb9b80-0838-4c07-b3c0-3bcc416b32ff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.161300953s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-709179 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-709179 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-709179 get pvc myclaim -o=json
I0407 13:06:08.348650  249516 retry.go:31] will retry after 2.019729419s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:52d65697-25c8-4613-9e76-26010b47974f ResourceVersion:680 Generation:0 CreationTimestamp:2025-04-07 13:06:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-52d65697-25c8-4613-9e76-26010b47974f StorageClassName:0xc001dcadf0 VolumeMode:0xc001dcae00 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-709179 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-709179 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [059924e1-63ee-46c4-9cc3-c64294f4c290] Pending
helpers_test.go:344: "sp-pod" [059924e1-63ee-46c4-9cc3-c64294f4c290] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [059924e1-63ee-46c4-9cc3-c64294f4c290] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003396528s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-709179 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-709179 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-709179 delete -f testdata/storage-provisioner/pod.yaml: (1.514703035s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-709179 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8079a206-9605-49a4-88f4-db75de6e9963] Pending
helpers_test.go:344: "sp-pod" [8079a206-9605-49a4-88f4-db75de6e9963] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8079a206-9605-49a4-88f4-db75de6e9963] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003494481s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-709179 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.13s)

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh -n functional-709179 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cp functional-709179:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1658169367/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh -n functional-709179 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh -n functional-709179 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

TestFunctional/parallel/MySQL (22.59s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-709179 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-kj2ln" [1459ef21-6582-473a-989d-287ff320d247] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-kj2ln" [1459ef21-6582-473a-989d-287ff320d247] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.006864206s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;": exit status 1 (617.369209ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0407 13:06:20.210465  249516 retry.go:31] will retry after 1.118316502s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;": exit status 1 (221.587552ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0407 13:06:21.551100  249516 retry.go:31] will retry after 1.29507572s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.59s)
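Note: the ERROR 2002 retries are expected; the pod reports Running before mysqld has finished initialising, so the test keeps re-running the query until the socket comes up. A rough manual check (not what the harness runs, and it may still need a retry or two while mysqld initialises):
	kubectl --context functional-709179 wait --for=condition=ready pod -l app=mysql --timeout=10m
	kubectl --context functional-709179 exec mysql-58ccfd96bb-kj2ln -- mysql -ppassword -e "show databases;"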
TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/249516/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/test/nested/copy/249516/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
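Note: /etc/test/nested/copy/249516/hosts is delivered by minikube's file sync: files placed under $MINIKUBE_HOME/files/ on the host are copied to the same path inside the VM when the cluster starts. A manual sketch, assuming the default ~/.minikube location and a subsequent start:
	mkdir -p ~/.minikube/files/etc/test/nested/copy/249516
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/249516/hosts
	out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/test/nested/copy/249516/hosts"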
TestFunctional/parallel/CertSync (1.31s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/249516.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/ssl/certs/249516.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/249516.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /usr/share/ca-certificates/249516.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/2495162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/ssl/certs/2495162.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/2495162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /usr/share/ca-certificates/2495162.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
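Note: the .0 entries appear to be OpenSSL subject-hash link names for the synced certificates (249516.pem pairing with 51391683.0, and 2495162.pem with 3ec20f2e.0). Assuming that is the scheme, and that openssl is available in the guest, the mapping can be spot-checked with:
	out/minikube-linux-amd64 -p functional-709179 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/249516.pem"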
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-709179 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
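Note: the go-template above simply prints every label key on the first node; an equivalent query without the template quoting (illustrative, not what the test runs):
	kubectl --context functional-709179 get nodes -o jsonpath='{.items[0].metadata.labels}'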
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "sudo systemctl is-active docker": exit status 1 (242.572416ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "sudo systemctl is-active containerd": exit status 1 (223.978625ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
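Note: the non-zero exits here are the expected result: with cri-o as the active runtime, systemctl is-active prints "inactive" for docker and containerd and exits with status 3, which ssh surfaces as "Process exited with status 3". A quick manual check:
	out/minikube-linux-amd64 -p functional-709179 ssh "sudo systemctl is-active docker; sudo systemctl is-active containerd"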
TestFunctional/parallel/License (0.18s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
TestFunctional/parallel/Version/components (0.75s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709179 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-709179
localhost/kicbase/echo-server:functional-709179
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709179 image ls --format short --alsologtostderr:
I0407 13:06:28.919255  257859 out.go:345] Setting OutFile to fd 1 ...
I0407 13:06:28.919482  257859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:28.919491  257859 out.go:358] Setting ErrFile to fd 2...
I0407 13:06:28.919496  257859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:28.919664  257859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
I0407 13:06:28.920355  257859 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:28.920506  257859 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:28.920864  257859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:28.920935  257859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:28.937351  257859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
I0407 13:06:28.937860  257859 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:28.938434  257859 main.go:141] libmachine: Using API Version  1
I0407 13:06:28.938460  257859 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:28.938851  257859 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:28.939018  257859 main.go:141] libmachine: (functional-709179) Calling .GetState
I0407 13:06:28.940988  257859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:28.941048  257859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:28.957181  257859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
I0407 13:06:28.957631  257859 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:28.958156  257859 main.go:141] libmachine: Using API Version  1
I0407 13:06:28.958176  257859 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:28.958638  257859 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:28.958846  257859 main.go:141] libmachine: (functional-709179) Calling .DriverName
I0407 13:06:28.959123  257859 ssh_runner.go:195] Run: systemctl --version
I0407 13:06:28.959162  257859 main.go:141] libmachine: (functional-709179) Calling .GetSSHHostname
I0407 13:06:28.961944  257859 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:28.962356  257859 main.go:141] libmachine: (functional-709179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:9d:23", ip: ""} in network mk-functional-709179: {Iface:virbr1 ExpiryTime:2025-04-07 14:03:44 +0000 UTC Type:0 Mac:52:54:00:8f:9d:23 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-709179 Clientid:01:52:54:00:8f:9d:23}
I0407 13:06:28.962388  257859 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined IP address 192.168.39.37 and MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:28.962502  257859 main.go:141] libmachine: (functional-709179) Calling .GetSSHPort
I0407 13:06:28.962664  257859 main.go:141] libmachine: (functional-709179) Calling .GetSSHKeyPath
I0407 13:06:28.962838  257859 main.go:141] libmachine: (functional-709179) Calling .GetSSHUsername
I0407 13:06:28.962958  257859 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/functional-709179/id_rsa Username:docker}
I0407 13:06:29.066399  257859 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:06:29.150296  257859 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.150309  257859 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.150596  257859 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.150613  257859 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:29.150621  257859 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.150628  257859 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.150633  257859 main.go:141] libmachine: (functional-709179) DBG | Closing plugin on server side
I0407 13:06:29.150910  257859 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.150929  257859 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:29.150967  257859 main.go:141] libmachine: (functional-709179) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
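Note: as the stderr trace shows, image ls is backed by crictl inside the VM (sudo crictl images --output json); the same data can be inspected directly with, for example:
	out/minikube-linux-amd64 -p functional-709179 ssh "sudo crictl images"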
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709179 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-709179  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-709179  | f77a7877330cc | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | 53a18edff8091 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709179 image ls --format table --alsologtostderr:
I0407 13:06:29.409972  257907 out.go:345] Setting OutFile to fd 1 ...
I0407 13:06:29.410466  257907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.410486  257907 out.go:358] Setting ErrFile to fd 2...
I0407 13:06:29.410493  257907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.410963  257907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
I0407 13:06:29.411935  257907 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.412060  257907 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.412478  257907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.412543  257907 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.428410  257907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
I0407 13:06:29.429023  257907 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.429601  257907 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.429624  257907 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.430027  257907 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.430231  257907 main.go:141] libmachine: (functional-709179) Calling .GetState
I0407 13:06:29.432235  257907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.432299  257907 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.447347  257907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
I0407 13:06:29.447715  257907 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.448126  257907 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.448154  257907 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.448509  257907 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.448676  257907 main.go:141] libmachine: (functional-709179) Calling .DriverName
I0407 13:06:29.448876  257907 ssh_runner.go:195] Run: systemctl --version
I0407 13:06:29.448900  257907 main.go:141] libmachine: (functional-709179) Calling .GetSSHHostname
I0407 13:06:29.451438  257907 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.451881  257907 main.go:141] libmachine: (functional-709179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:9d:23", ip: ""} in network mk-functional-709179: {Iface:virbr1 ExpiryTime:2025-04-07 14:03:44 +0000 UTC Type:0 Mac:52:54:00:8f:9d:23 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-709179 Clientid:01:52:54:00:8f:9d:23}
I0407 13:06:29.451914  257907 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined IP address 192.168.39.37 and MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.452053  257907 main.go:141] libmachine: (functional-709179) Calling .GetSSHPort
I0407 13:06:29.452223  257907 main.go:141] libmachine: (functional-709179) Calling .GetSSHKeyPath
I0407 13:06:29.452345  257907 main.go:141] libmachine: (functional-709179) Calling .GetSSHUsername
I0407 13:06:29.452518  257907 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/functional-709179/id_rsa Username:docker}
I0407 13:06:29.532137  257907 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:06:29.576971  257907 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.576993  257907 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.577294  257907 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.577315  257907 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:29.577337  257907 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.577346  257907 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.577344  257907 main.go:141] libmachine: (functional-709179) DBG | Closing plugin on server side
I0407 13:06:29.577593  257907 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.577612  257907 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709179 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"53a18edff80
91d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19","docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4"],"repoTags":["docker.io/library/nginx:latest"],"size":"196159380"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-709179"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064
c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@s
ha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"f77a7877330cc67830fcedd0bba43ffcff4c73617cf9af18c296a129c660359f","repoDigests":["localhost/minikube-local-cache-test@sha256:062b1ab335ab112ac78d4cb1f8260da5044ef6a7fc704dd91ccf8f6b05252769"],"repoTags":["localhost/minikube-local-cache-test:functional-709179"],"size":"3330"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:
399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325
e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709179 image ls --format json --alsologtostderr:
I0407 13:06:29.199660  257883 out.go:345] Setting OutFile to fd 1 ...
I0407 13:06:29.199751  257883 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.199758  257883 out.go:358] Setting ErrFile to fd 2...
I0407 13:06:29.199762  257883 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.199930  257883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
I0407 13:06:29.200471  257883 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.200589  257883 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.201003  257883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.201086  257883 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.216501  257883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
I0407 13:06:29.217189  257883 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.218316  257883 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.218341  257883 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.218911  257883 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.219104  257883 main.go:141] libmachine: (functional-709179) Calling .GetState
I0407 13:06:29.221184  257883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.221221  257883 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.236256  257883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
I0407 13:06:29.236762  257883 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.237232  257883 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.237258  257883 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.237586  257883 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.237790  257883 main.go:141] libmachine: (functional-709179) Calling .DriverName
I0407 13:06:29.237978  257883 ssh_runner.go:195] Run: systemctl --version
I0407 13:06:29.238002  257883 main.go:141] libmachine: (functional-709179) Calling .GetSSHHostname
I0407 13:06:29.240652  257883 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.241031  257883 main.go:141] libmachine: (functional-709179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:9d:23", ip: ""} in network mk-functional-709179: {Iface:virbr1 ExpiryTime:2025-04-07 14:03:44 +0000 UTC Type:0 Mac:52:54:00:8f:9d:23 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-709179 Clientid:01:52:54:00:8f:9d:23}
I0407 13:06:29.241060  257883 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined IP address 192.168.39.37 and MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.241269  257883 main.go:141] libmachine: (functional-709179) Calling .GetSSHPort
I0407 13:06:29.241442  257883 main.go:141] libmachine: (functional-709179) Calling .GetSSHKeyPath
I0407 13:06:29.241594  257883 main.go:141] libmachine: (functional-709179) Calling .GetSSHUsername
I0407 13:06:29.241728  257883 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/functional-709179/id_rsa Username:docker}
I0407 13:06:29.319039  257883 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:06:29.359775  257883 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.359791  257883 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.360085  257883 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.360094  257883 main.go:141] libmachine: (functional-709179) DBG | Closing plugin on server side
I0407 13:06:29.360106  257883 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:29.360134  257883 main.go:141] libmachine: Making call to close driver server
I0407 13:06:29.360142  257883 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:29.360404  257883 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:29.360436  257883 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709179 image ls --format yaml --alsologtostderr:
- id: a02c8a7123e65a4766d8f0a9ea76723d6d8909fec95937e580ac190614bedffd
repoDigests:
- docker.io/library/4023e4ae665ae8091dc0b2812cd030f0c2d3cd297b9212aa931675ca8cec7ed7-tmp@sha256:ebc504ee5986b5d8c8893855cae9e7e7b81fbd5aeb5673a0357e3a2d7cbd1b95
repoTags: []
size: "1466018"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7ce4f72a97316bea6b1ce0213313c642450857574c7061444b80d323c60718b9
repoDigests:
- localhost/my-image@sha256:4c0f46250b98b06865a28f29d9c634c7b9cecf1c00e072f977952435ed7953b3
repoTags:
- localhost/my-image:functional-709179
size: "1468600"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-709179
size: "4943877"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: f77a7877330cc67830fcedd0bba43ffcff4c73617cf9af18c296a129c660359f
repoDigests:
- localhost/minikube-local-cache-test@sha256:062b1ab335ab112ac78d4cb1f8260da5044ef6a7fc704dd91ccf8f6b05252769
repoTags:
- localhost/minikube-local-cache-test:functional-709179
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709179 image ls --format yaml --alsologtostderr:
I0407 13:06:36.152856  258485 out.go:345] Setting OutFile to fd 1 ...
I0407 13:06:36.153119  258485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:36.153130  258485 out.go:358] Setting ErrFile to fd 2...
I0407 13:06:36.153135  258485 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:36.153331  258485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
I0407 13:06:36.153884  258485 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:36.153980  258485 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:36.154347  258485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:36.154435  258485 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:36.170106  258485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
I0407 13:06:36.170617  258485 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:36.171220  258485 main.go:141] libmachine: Using API Version  1
I0407 13:06:36.171249  258485 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:36.171623  258485 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:36.171823  258485 main.go:141] libmachine: (functional-709179) Calling .GetState
I0407 13:06:36.173999  258485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:36.174449  258485 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:36.191227  258485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
I0407 13:06:36.191758  258485 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:36.192264  258485 main.go:141] libmachine: Using API Version  1
I0407 13:06:36.192288  258485 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:36.192640  258485 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:36.192847  258485 main.go:141] libmachine: (functional-709179) Calling .DriverName
I0407 13:06:36.193097  258485 ssh_runner.go:195] Run: systemctl --version
I0407 13:06:36.193132  258485 main.go:141] libmachine: (functional-709179) Calling .GetSSHHostname
I0407 13:06:36.198444  258485 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:36.198957  258485 main.go:141] libmachine: (functional-709179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:9d:23", ip: ""} in network mk-functional-709179: {Iface:virbr1 ExpiryTime:2025-04-07 14:03:44 +0000 UTC Type:0 Mac:52:54:00:8f:9d:23 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-709179 Clientid:01:52:54:00:8f:9d:23}
I0407 13:06:36.198988  258485 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined IP address 192.168.39.37 and MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:36.199134  258485 main.go:141] libmachine: (functional-709179) Calling .GetSSHPort
I0407 13:06:36.199305  258485 main.go:141] libmachine: (functional-709179) Calling .GetSSHKeyPath
I0407 13:06:36.199513  258485 main.go:141] libmachine: (functional-709179) Calling .GetSSHUsername
I0407 13:06:36.199822  258485 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/functional-709179/id_rsa Username:docker}
I0407 13:06:36.308879  258485 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 13:06:36.423123  258485 main.go:141] libmachine: Making call to close driver server
I0407 13:06:36.423145  258485 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:36.423488  258485 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:36.423534  258485 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:36.423554  258485 main.go:141] libmachine: Making call to close driver server
I0407 13:06:36.423563  258485 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:36.423869  258485 main.go:141] libmachine: (functional-709179) DBG | Closing plugin on server side
I0407 13:06:36.423907  258485 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:36.423913  258485 main.go:141] libmachine: Making call to close connection to plugin binary
2025/04/07 13:06:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh pgrep buildkitd: exit status 1 (207.922679ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image build -t localhost/my-image:functional-709179 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 image build -t localhost/my-image:functional-709179 testdata/build --alsologtostderr: (6.410176226s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709179 image build -t localhost/my-image:functional-709179 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a02c8a7123e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-709179
--> 7ce4f72a973
Successfully tagged localhost/my-image:functional-709179
7ce4f72a97316bea6b1ce0213313c642450857574c7061444b80d323c60718b9
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709179 image build -t localhost/my-image:functional-709179 testdata/build --alsologtostderr:
I0407 13:06:29.833958  257977 out.go:345] Setting OutFile to fd 1 ...
I0407 13:06:29.834259  257977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.834271  257977 out.go:358] Setting ErrFile to fd 2...
I0407 13:06:29.834276  257977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:06:29.834470  257977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
I0407 13:06:29.835017  257977 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.835769  257977 config.go:182] Loaded profile config "functional-709179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0407 13:06:29.836238  257977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.836286  257977 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.852306  257977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
I0407 13:06:29.852783  257977 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.853299  257977 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.853326  257977 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.853689  257977 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.853899  257977 main.go:141] libmachine: (functional-709179) Calling .GetState
I0407 13:06:29.855504  257977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0407 13:06:29.855543  257977 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 13:06:29.870236  257977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
I0407 13:06:29.870681  257977 main.go:141] libmachine: () Calling .GetVersion
I0407 13:06:29.871139  257977 main.go:141] libmachine: Using API Version  1
I0407 13:06:29.871165  257977 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 13:06:29.871561  257977 main.go:141] libmachine: () Calling .GetMachineName
I0407 13:06:29.871738  257977 main.go:141] libmachine: (functional-709179) Calling .DriverName
I0407 13:06:29.871933  257977 ssh_runner.go:195] Run: systemctl --version
I0407 13:06:29.871957  257977 main.go:141] libmachine: (functional-709179) Calling .GetSSHHostname
I0407 13:06:29.874793  257977 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.875217  257977 main.go:141] libmachine: (functional-709179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:9d:23", ip: ""} in network mk-functional-709179: {Iface:virbr1 ExpiryTime:2025-04-07 14:03:44 +0000 UTC Type:0 Mac:52:54:00:8f:9d:23 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-709179 Clientid:01:52:54:00:8f:9d:23}
I0407 13:06:29.875262  257977 main.go:141] libmachine: (functional-709179) DBG | domain functional-709179 has defined IP address 192.168.39.37 and MAC address 52:54:00:8f:9d:23 in network mk-functional-709179
I0407 13:06:29.875352  257977 main.go:141] libmachine: (functional-709179) Calling .GetSSHPort
I0407 13:06:29.875524  257977 main.go:141] libmachine: (functional-709179) Calling .GetSSHKeyPath
I0407 13:06:29.875645  257977 main.go:141] libmachine: (functional-709179) Calling .GetSSHUsername
I0407 13:06:29.875789  257977 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/functional-709179/id_rsa Username:docker}
I0407 13:06:29.951510  257977 build_images.go:161] Building image from path: /tmp/build.3917526468.tar
I0407 13:06:29.951579  257977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 13:06:29.965922  257977 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3917526468.tar
I0407 13:06:29.970787  257977 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3917526468.tar: stat -c "%s %y" /var/lib/minikube/build/build.3917526468.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3917526468.tar': No such file or directory
I0407 13:06:29.970817  257977 ssh_runner.go:362] scp /tmp/build.3917526468.tar --> /var/lib/minikube/build/build.3917526468.tar (3072 bytes)
I0407 13:06:29.997781  257977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3917526468
I0407 13:06:30.009155  257977 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3917526468 -xf /var/lib/minikube/build/build.3917526468.tar
I0407 13:06:30.018824  257977 crio.go:315] Building image: /var/lib/minikube/build/build.3917526468
I0407 13:06:30.018926  257977 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-709179 /var/lib/minikube/build/build.3917526468 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0407 13:06:36.126285  257977 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-709179 /var/lib/minikube/build/build.3917526468 --cgroup-manager=cgroupfs: (6.107325744s)
I0407 13:06:36.126372  257977 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3917526468
I0407 13:06:36.179271  257977 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3917526468.tar
I0407 13:06:36.193990  257977 build_images.go:217] Built localhost/my-image:functional-709179 from /tmp/build.3917526468.tar
I0407 13:06:36.194024  257977 build_images.go:133] succeeded building to: functional-709179
I0407 13:06:36.194031  257977 build_images.go:134] failed building to: 
I0407 13:06:36.194064  257977 main.go:141] libmachine: Making call to close driver server
I0407 13:06:36.194079  257977 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:36.194329  257977 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:36.194345  257977 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 13:06:36.194354  257977 main.go:141] libmachine: Making call to close driver server
I0407 13:06:36.194361  257977 main.go:141] libmachine: (functional-709179) Calling .Close
I0407 13:06:36.194652  257977 main.go:141] libmachine: Successfully made call to close driver server
I0407 13:06:36.194674  257977 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)
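
Note: the build_images flow logged above is the plumbing behind a user-facing image build: the build context is tarred on the host, copied into the VM over SSH, and built there with podman. A minimal hand-run sketch of the equivalent, assuming a Dockerfile in the current directory (the tag is illustrative):

	out/minikube-linux-amd64 -p functional-709179 image build -t localhost/my-image:functional-709179 .
	out/minikube-linux-amd64 -p functional-709179 image ls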

TestFunctional/parallel/ImageCommands/Setup (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.511520302s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-709179
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image load --daemon kicbase/echo-server:functional-709179 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 image load --daemon kicbase/echo-server:functional-709179 --alsologtostderr: (1.210635048s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "277.509038ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.41629ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "308.446472ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "49.340768ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image load --daemon kicbase/echo-server:functional-709179 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-709179
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image load --daemon kicbase/echo-server:functional-709179 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image save kicbase/echo-server:functional-709179 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 image save kicbase/echo-server:functional-709179 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.937492683s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image rm kicbase/echo-server:functional-709179 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-linux-amd64 -p functional-709179 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.863940798s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.11s)
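
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile above exercise the image tar round-trip. A minimal sketch of the same round-trip using the commands from this run (the save path is illustrative):

	out/minikube-linux-amd64 -p functional-709179 image save kicbase/echo-server:functional-709179 /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-709179 image rm kicbase/echo-server:functional-709179 --alsologtostderr
	out/minikube-linux-amd64 -p functional-709179 image load /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-709179 image ls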

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-709179
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 image save --daemon kicbase/echo-server:functional-709179 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-709179
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-709179 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-709179 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-nmp2l" [63f0aef8-4622-4411-95d6-9caa8a322bdf] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-nmp2l" [63f0aef8-4622-4411-95d6-9caa8a322bdf] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003950436s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.16s)

TestFunctional/parallel/MountCmd/any-port (9.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdany-port315178868/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744031183180922291" to /tmp/TestFunctionalparallelMountCmdany-port315178868/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744031183180922291" to /tmp/TestFunctionalparallelMountCmdany-port315178868/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744031183180922291" to /tmp/TestFunctionalparallelMountCmdany-port315178868/001/test-1744031183180922291
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.753379ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0407 13:06:23.415022  249516 retry.go:31] will retry after 285.100172ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 13:06 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 13:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 13:06 test-1744031183180922291
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh cat /mount-9p/test-1744031183180922291
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-709179 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [58db901b-4797-47fc-b3a0-18d2910a880a] Pending
helpers_test.go:344: "busybox-mount" [58db901b-4797-47fc-b3a0-18d2910a880a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [58db901b-4797-47fc-b3a0-18d2910a880a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [58db901b-4797-47fc-b3a0-18d2910a880a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.002678901s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-709179 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdany-port315178868/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.66s)
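
Note: the any-port flow above is the basic host-directory mount check. A minimal sketch of the same steps run by hand (the host path is illustrative; the mount command stays in the foreground, so background it or use a second shell):

	out/minikube-linux-amd64 mount -p functional-709179 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-709179 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-709179 ssh "sudo umount -f /mount-9p"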

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service list -o json
functional_test.go:1511: Took "443.990203ms" to run "out/minikube-linux-amd64 -p functional-709179 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.37:31855
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.37:31855
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
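
Note: the ServiceCmd subtests above walk a NodePort service end to end. A minimal sketch of the same flow, with image and port taken from this run:

	kubectl --context functional-709179 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-709179 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-709179 service hello-node --url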

TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdspecific-port2057328990/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (271.082ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0407 13:06:33.116066  249516 retry.go:31] will retry after 500.901819ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdspecific-port2057328990/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "sudo umount -f /mount-9p": exit status 1 (277.249991ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-709179 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdspecific-port2057328990/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T" /mount1: exit status 1 (299.25018ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0407 13:06:35.112706  249516 retry.go:31] will retry after 251.504533ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709179 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-709179 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1191149430/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.30s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-709179
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-709179
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-709179
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (193.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730699 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:07:47.434152  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:08:15.148598  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-730699 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m13.035895005s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.73s)
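
Note: the flags above are what produce the multi-control-plane (HA) topology that the rest of TestMultiControlPlane exercises. A minimal sketch, copied from this run (memory and verbosity are tunable):

	out/minikube-linux-amd64 start -p ha-730699 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr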

TestMultiControlPlane/serial/DeployApp (8.85s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-730699 -- rollout status deployment/busybox: (6.731468325s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-2b2nj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-lv5mz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-pl5bc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-2b2nj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-lv5mz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-pl5bc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-2b2nj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-lv5mz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-pl5bc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.85s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-2b2nj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-2b2nj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-lv5mz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-lv5mz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-pl5bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-730699 -- exec busybox-58667487b6-pl5bc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)

TestMultiControlPlane/serial/AddWorkerNode (59.57s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-730699 -v=7 --alsologtostderr
E0407 13:11:00.586158  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.592574  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.604052  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.625493  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.666953  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.748481  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:00.910149  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:01.231689  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:01.873180  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:03.155237  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:11:05.717589  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-730699 -v=7 --alsologtostderr: (58.700397152s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
E0407 13:11:10.839403  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.57s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-730699 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (13.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp testdata/cp-test.txt ha-730699:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2627927430/001/cp-test_ha-730699.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699:/home/docker/cp-test.txt ha-730699-m02:/home/docker/cp-test_ha-730699_ha-730699-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test_ha-730699_ha-730699-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699:/home/docker/cp-test.txt ha-730699-m03:/home/docker/cp-test_ha-730699_ha-730699-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test_ha-730699_ha-730699-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699:/home/docker/cp-test.txt ha-730699-m04:/home/docker/cp-test_ha-730699_ha-730699-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test_ha-730699_ha-730699-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp testdata/cp-test.txt ha-730699-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2627927430/001/cp-test_ha-730699-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m02:/home/docker/cp-test.txt ha-730699:/home/docker/cp-test_ha-730699-m02_ha-730699.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test_ha-730699-m02_ha-730699.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m02:/home/docker/cp-test.txt ha-730699-m03:/home/docker/cp-test_ha-730699-m02_ha-730699-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test_ha-730699-m02_ha-730699-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m02:/home/docker/cp-test.txt ha-730699-m04:/home/docker/cp-test_ha-730699-m02_ha-730699-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test_ha-730699-m02_ha-730699-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp testdata/cp-test.txt ha-730699-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2627927430/001/cp-test_ha-730699-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m03:/home/docker/cp-test.txt ha-730699:/home/docker/cp-test_ha-730699-m03_ha-730699.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test_ha-730699-m03_ha-730699.txt"
E0407 13:11:21.081609  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m03:/home/docker/cp-test.txt ha-730699-m02:/home/docker/cp-test_ha-730699-m03_ha-730699-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test_ha-730699-m03_ha-730699-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m03:/home/docker/cp-test.txt ha-730699-m04:/home/docker/cp-test_ha-730699-m03_ha-730699-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test_ha-730699-m03_ha-730699-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp testdata/cp-test.txt ha-730699-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2627927430/001/cp-test_ha-730699-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m04:/home/docker/cp-test.txt ha-730699:/home/docker/cp-test_ha-730699-m04_ha-730699.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699 "sudo cat /home/docker/cp-test_ha-730699-m04_ha-730699.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m04:/home/docker/cp-test.txt ha-730699-m02:/home/docker/cp-test_ha-730699-m04_ha-730699-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test_ha-730699-m04_ha-730699-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 cp ha-730699-m04:/home/docker/cp-test.txt ha-730699-m03:/home/docker/cp-test_ha-730699-m04_ha-730699-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m03 "sudo cat /home/docker/cp-test_ha-730699-m04_ha-730699-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.03s)
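
Note: each CopyFile step above pairs a cp with an ssh cat on the target node to confirm the copy landed. A minimal sketch of one such pair, taken from this run:

	out/minikube-linux-amd64 -p ha-730699 cp testdata/cp-test.txt ha-730699-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-730699 ssh -n ha-730699-m02 "sudo cat /home/docker/cp-test.txt"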

TestMultiControlPlane/serial/StopSecondaryNode (91.66s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 node stop m02 -v=7 --alsologtostderr
E0407 13:11:41.563127  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:12:22.525029  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:12:47.434539  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-730699 node stop m02 -v=7 --alsologtostderr: (1m31.005586309s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr: exit status 7 (651.274165ms)

-- stdout --
	ha-730699
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-730699-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730699-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-730699-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0407 13:12:56.688548  263176 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:12:56.688706  263176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:56.688717  263176 out.go:358] Setting ErrFile to fd 2...
	I0407 13:12:56.688722  263176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:56.688984  263176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:12:56.689186  263176 out.go:352] Setting JSON to false
	I0407 13:12:56.689219  263176 mustload.go:65] Loading cluster: ha-730699
	I0407 13:12:56.689324  263176 notify.go:220] Checking for updates...
	I0407 13:12:56.689637  263176 config.go:182] Loaded profile config "ha-730699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:12:56.689660  263176 status.go:174] checking status of ha-730699 ...
	I0407 13:12:56.690203  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.690262  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.707743  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0407 13:12:56.708272  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.708981  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.709054  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.709435  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.709626  263176 main.go:141] libmachine: (ha-730699) Calling .GetState
	I0407 13:12:56.711523  263176 status.go:371] ha-730699 host status = "Running" (err=<nil>)
	I0407 13:12:56.711542  263176 host.go:66] Checking if "ha-730699" exists ...
	I0407 13:12:56.711859  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.711905  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.727661  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0407 13:12:56.728186  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.728719  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.728744  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.729158  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.729379  263176 main.go:141] libmachine: (ha-730699) Calling .GetIP
	I0407 13:12:56.732007  263176 main.go:141] libmachine: (ha-730699) DBG | domain ha-730699 has defined MAC address 52:54:00:fc:2d:12 in network mk-ha-730699
	I0407 13:12:56.732596  263176 main.go:141] libmachine: (ha-730699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:2d:12", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:07:03 +0000 UTC Type:0 Mac:52:54:00:fc:2d:12 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-730699 Clientid:01:52:54:00:fc:2d:12}
	I0407 13:12:56.732620  263176 main.go:141] libmachine: (ha-730699) DBG | domain ha-730699 has defined IP address 192.168.39.181 and MAC address 52:54:00:fc:2d:12 in network mk-ha-730699
	I0407 13:12:56.732782  263176 host.go:66] Checking if "ha-730699" exists ...
	I0407 13:12:56.733068  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.733113  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.748299  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0407 13:12:56.748811  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.749446  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.749466  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.749851  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.750023  263176 main.go:141] libmachine: (ha-730699) Calling .DriverName
	I0407 13:12:56.750212  263176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:56.750238  263176 main.go:141] libmachine: (ha-730699) Calling .GetSSHHostname
	I0407 13:12:56.752798  263176 main.go:141] libmachine: (ha-730699) DBG | domain ha-730699 has defined MAC address 52:54:00:fc:2d:12 in network mk-ha-730699
	I0407 13:12:56.753264  263176 main.go:141] libmachine: (ha-730699) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:2d:12", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:07:03 +0000 UTC Type:0 Mac:52:54:00:fc:2d:12 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:ha-730699 Clientid:01:52:54:00:fc:2d:12}
	I0407 13:12:56.753288  263176 main.go:141] libmachine: (ha-730699) DBG | domain ha-730699 has defined IP address 192.168.39.181 and MAC address 52:54:00:fc:2d:12 in network mk-ha-730699
	I0407 13:12:56.753403  263176 main.go:141] libmachine: (ha-730699) Calling .GetSSHPort
	I0407 13:12:56.753590  263176 main.go:141] libmachine: (ha-730699) Calling .GetSSHKeyPath
	I0407 13:12:56.753739  263176 main.go:141] libmachine: (ha-730699) Calling .GetSSHUsername
	I0407 13:12:56.753872  263176 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/ha-730699/id_rsa Username:docker}
	I0407 13:12:56.838068  263176 ssh_runner.go:195] Run: systemctl --version
	I0407 13:12:56.847334  263176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:56.863944  263176 kubeconfig.go:125] found "ha-730699" server: "https://192.168.39.254:8443"
	I0407 13:12:56.863994  263176 api_server.go:166] Checking apiserver status ...
	I0407 13:12:56.864065  263176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:12:56.885253  263176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup
	W0407 13:12:56.896176  263176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1139/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:12:56.896244  263176 ssh_runner.go:195] Run: ls
	I0407 13:12:56.900834  263176 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 13:12:56.905312  263176 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 13:12:56.905340  263176 status.go:463] ha-730699 apiserver status = Running (err=<nil>)
	I0407 13:12:56.905353  263176 status.go:176] ha-730699 status: &{Name:ha-730699 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:56.905376  263176 status.go:174] checking status of ha-730699-m02 ...
	I0407 13:12:56.905742  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.905802  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.921012  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0407 13:12:56.921568  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.922155  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.922181  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.922595  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.922783  263176 main.go:141] libmachine: (ha-730699-m02) Calling .GetState
	I0407 13:12:56.924389  263176 status.go:371] ha-730699-m02 host status = "Stopped" (err=<nil>)
	I0407 13:12:56.924404  263176 status.go:384] host is not running, skipping remaining checks
	I0407 13:12:56.924411  263176 status.go:176] ha-730699-m02 status: &{Name:ha-730699-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:56.924466  263176 status.go:174] checking status of ha-730699-m03 ...
	I0407 13:12:56.924761  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.924799  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.940098  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I0407 13:12:56.940661  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.941171  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.941204  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.941546  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.941744  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetState
	I0407 13:12:56.943164  263176 status.go:371] ha-730699-m03 host status = "Running" (err=<nil>)
	I0407 13:12:56.943183  263176 host.go:66] Checking if "ha-730699-m03" exists ...
	I0407 13:12:56.943472  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.943515  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.959186  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0407 13:12:56.959644  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.960118  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.960138  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.960441  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.960618  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetIP
	I0407 13:12:56.963794  263176 main.go:141] libmachine: (ha-730699-m03) DBG | domain ha-730699-m03 has defined MAC address 52:54:00:d2:7b:5d in network mk-ha-730699
	I0407 13:12:56.964444  263176 main.go:141] libmachine: (ha-730699-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:7b:5d", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:09:01 +0000 UTC Type:0 Mac:52:54:00:d2:7b:5d Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-730699-m03 Clientid:01:52:54:00:d2:7b:5d}
	I0407 13:12:56.964517  263176 main.go:141] libmachine: (ha-730699-m03) DBG | domain ha-730699-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:d2:7b:5d in network mk-ha-730699
	I0407 13:12:56.964623  263176 host.go:66] Checking if "ha-730699-m03" exists ...
	I0407 13:12:56.964945  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:56.964980  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:56.981597  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0407 13:12:56.982005  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:56.982469  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:56.982489  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:56.982850  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:56.983069  263176 main.go:141] libmachine: (ha-730699-m03) Calling .DriverName
	I0407 13:12:56.983244  263176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:56.983268  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetSSHHostname
	I0407 13:12:56.986009  263176 main.go:141] libmachine: (ha-730699-m03) DBG | domain ha-730699-m03 has defined MAC address 52:54:00:d2:7b:5d in network mk-ha-730699
	I0407 13:12:56.986426  263176 main.go:141] libmachine: (ha-730699-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:7b:5d", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:09:01 +0000 UTC Type:0 Mac:52:54:00:d2:7b:5d Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-730699-m03 Clientid:01:52:54:00:d2:7b:5d}
	I0407 13:12:56.986451  263176 main.go:141] libmachine: (ha-730699-m03) DBG | domain ha-730699-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:d2:7b:5d in network mk-ha-730699
	I0407 13:12:56.986539  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetSSHPort
	I0407 13:12:56.986712  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetSSHKeyPath
	I0407 13:12:56.986885  263176 main.go:141] libmachine: (ha-730699-m03) Calling .GetSSHUsername
	I0407 13:12:56.987034  263176 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/ha-730699-m03/id_rsa Username:docker}
	I0407 13:12:57.069554  263176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:57.087087  263176 kubeconfig.go:125] found "ha-730699" server: "https://192.168.39.254:8443"
	I0407 13:12:57.087125  263176 api_server.go:166] Checking apiserver status ...
	I0407 13:12:57.087163  263176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:12:57.104132  263176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1480/cgroup
	W0407 13:12:57.113919  263176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1480/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:12:57.113989  263176 ssh_runner.go:195] Run: ls
	I0407 13:12:57.118675  263176 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 13:12:57.125374  263176 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 13:12:57.125409  263176 status.go:463] ha-730699-m03 apiserver status = Running (err=<nil>)
	I0407 13:12:57.125421  263176 status.go:176] ha-730699-m03 status: &{Name:ha-730699-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:57.125442  263176 status.go:174] checking status of ha-730699-m04 ...
	I0407 13:12:57.125807  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:57.125865  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:57.141479  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0407 13:12:57.141929  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:57.142384  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:57.142405  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:57.142770  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:57.142994  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetState
	I0407 13:12:57.145075  263176 status.go:371] ha-730699-m04 host status = "Running" (err=<nil>)
	I0407 13:12:57.145096  263176 host.go:66] Checking if "ha-730699-m04" exists ...
	I0407 13:12:57.145376  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:57.145423  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:57.162882  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0407 13:12:57.163440  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:57.163964  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:57.163996  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:57.164373  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:57.164566  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetIP
	I0407 13:12:57.167782  263176 main.go:141] libmachine: (ha-730699-m04) DBG | domain ha-730699-m04 has defined MAC address 52:54:00:fb:ca:88 in network mk-ha-730699
	I0407 13:12:57.168266  263176 main.go:141] libmachine: (ha-730699-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ca:88", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:10:28 +0000 UTC Type:0 Mac:52:54:00:fb:ca:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-730699-m04 Clientid:01:52:54:00:fb:ca:88}
	I0407 13:12:57.168289  263176 main.go:141] libmachine: (ha-730699-m04) DBG | domain ha-730699-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:fb:ca:88 in network mk-ha-730699
	I0407 13:12:57.168468  263176 host.go:66] Checking if "ha-730699-m04" exists ...
	I0407 13:12:57.168760  263176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:12:57.168804  263176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:12:57.184515  263176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I0407 13:12:57.184922  263176 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:12:57.185388  263176 main.go:141] libmachine: Using API Version  1
	I0407 13:12:57.185411  263176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:12:57.185750  263176 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:12:57.185934  263176 main.go:141] libmachine: (ha-730699-m04) Calling .DriverName
	I0407 13:12:57.186140  263176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:57.186162  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetSSHHostname
	I0407 13:12:57.188870  263176 main.go:141] libmachine: (ha-730699-m04) DBG | domain ha-730699-m04 has defined MAC address 52:54:00:fb:ca:88 in network mk-ha-730699
	I0407 13:12:57.189252  263176 main.go:141] libmachine: (ha-730699-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:ca:88", ip: ""} in network mk-ha-730699: {Iface:virbr1 ExpiryTime:2025-04-07 14:10:28 +0000 UTC Type:0 Mac:52:54:00:fb:ca:88 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-730699-m04 Clientid:01:52:54:00:fb:ca:88}
	I0407 13:12:57.189279  263176 main.go:141] libmachine: (ha-730699-m04) DBG | domain ha-730699-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:fb:ca:88 in network mk-ha-730699
	I0407 13:12:57.189431  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetSSHPort
	I0407 13:12:57.189599  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetSSHKeyPath
	I0407 13:12:57.189741  263176 main.go:141] libmachine: (ha-730699-m04) Calling .GetSSHUsername
	I0407 13:12:57.189895  263176 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/ha-730699-m04/id_rsa Username:docker}
	I0407 13:12:57.273664  263176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:57.290224  263176 status.go:176] ha-730699-m04 status: &{Name:ha-730699-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (52.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 node start m02 -v=7 --alsologtostderr
E0407 13:13:44.447046  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-730699 node start m02 -v=7 --alsologtostderr: (51.848222935s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-730699 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-730699 -v=7 --alsologtostderr
E0407 13:16:00.586397  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:28.288466  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:47.434117  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-730699 -v=7 --alsologtostderr: (4m33.939378989s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730699 --wait=true -v=7 --alsologtostderr
E0407 13:19:10.510305  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:21:00.586460  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-730699 --wait=true -v=7 --alsologtostderr: (2m42.859980754s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-730699
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (436.91s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (19.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-730699 node delete m03 -v=7 --alsologtostderr: (18.417488802s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.17s)
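Aside: the `kubectl get nodes -o go-template=...` check above relies on Go's standard text/template syntax. As a rough illustration of what that template computes, the sketch below runs the same template string over a hand-written stand-in for the nodes list (the JSON document here is hypothetical sample data, not output captured from this run):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hypothetical stand-in for `kubectl get nodes -o json`: two nodes,
	// each carrying a Ready condition. Not data from the test run above.
	doc := `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
	]}`

	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &nodes); err != nil {
		panic(err)
	}

	// The same template string the test passes via -o go-template=...:
	// for every node, print the status of its Ready condition.
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints one " True" line per node.
}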

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 stop -v=7 --alsologtostderr
E0407 13:22:47.434187  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:26:00.586199  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-730699 stop -v=7 --alsologtostderr: (4m32.649963613s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr: exit status 7 (108.778936ms)

                                                
                                                
-- stdout --
	ha-730699
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730699-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-730699-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:26:01.009928  267800 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:26:01.010210  267800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:01.010222  267800 out.go:358] Setting ErrFile to fd 2...
	I0407 13:26:01.010226  267800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:01.010506  267800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:26:01.010840  267800 out.go:352] Setting JSON to false
	I0407 13:26:01.010881  267800 mustload.go:65] Loading cluster: ha-730699
	I0407 13:26:01.010983  267800 notify.go:220] Checking for updates...
	I0407 13:26:01.011321  267800 config.go:182] Loaded profile config "ha-730699": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:26:01.011351  267800 status.go:174] checking status of ha-730699 ...
	I0407 13:26:01.011849  267800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:01.011919  267800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:01.028786  267800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0407 13:26:01.029375  267800 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:01.029961  267800 main.go:141] libmachine: Using API Version  1
	I0407 13:26:01.029992  267800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:01.030448  267800 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:01.030677  267800 main.go:141] libmachine: (ha-730699) Calling .GetState
	I0407 13:26:01.032508  267800 status.go:371] ha-730699 host status = "Stopped" (err=<nil>)
	I0407 13:26:01.032530  267800 status.go:384] host is not running, skipping remaining checks
	I0407 13:26:01.032538  267800 status.go:176] ha-730699 status: &{Name:ha-730699 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:26:01.032576  267800 status.go:174] checking status of ha-730699-m02 ...
	I0407 13:26:01.032889  267800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:01.032917  267800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:01.048177  267800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0407 13:26:01.048601  267800 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:01.048996  267800 main.go:141] libmachine: Using API Version  1
	I0407 13:26:01.049011  267800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:01.049333  267800 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:01.049491  267800 main.go:141] libmachine: (ha-730699-m02) Calling .GetState
	I0407 13:26:01.050975  267800 status.go:371] ha-730699-m02 host status = "Stopped" (err=<nil>)
	I0407 13:26:01.050987  267800 status.go:384] host is not running, skipping remaining checks
	I0407 13:26:01.050993  267800 status.go:176] ha-730699-m02 status: &{Name:ha-730699-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:26:01.051010  267800 status.go:174] checking status of ha-730699-m04 ...
	I0407 13:26:01.051346  267800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:26:01.051392  267800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:26:01.066161  267800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0407 13:26:01.066605  267800 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:26:01.067092  267800 main.go:141] libmachine: Using API Version  1
	I0407 13:26:01.067122  267800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:26:01.067536  267800 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:26:01.067747  267800 main.go:141] libmachine: (ha-730699-m04) Calling .GetState
	I0407 13:26:01.069305  267800 status.go:371] ha-730699-m04 host status = "Stopped" (err=<nil>)
	I0407 13:26:01.069320  267800 status.go:384] host is not running, skipping remaining checks
	I0407 13:26:01.069326  267800 status.go:176] ha-730699-m04 status: &{Name:ha-730699-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.76s)
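Aside: the `status: &{Name:... Host:Stopped ...}` lines in the stderr above are Go structs rendered with fmt's %+v verb. A minimal reconstruction, with field names copied from the log output (an illustrative shape only, not minikube's actual status type):

package main

import "fmt"

// status mirrors the field names visible in the log lines above; it is a
// reconstruction for illustration only, not minikube's internal type.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := status{
		Name:       "ha-730699-m04",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
		Worker:     true,
	}
	// %+v on a pointer produces the same "&{Name:... Host:Stopped ...}"
	// form seen in the status.go log lines above.
	fmt.Printf("%+v\n", &s)
}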

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (128.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-730699 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:27:23.650727  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:27:47.434122  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-730699 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m8.178820857s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (128.94s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-730699 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-730699 --control-plane -v=7 --alsologtostderr: (1m16.810060468s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-730699 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (87.5s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-041718 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-041718 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.499156141s)
--- PASS: TestJSONOutput/start/Command (87.50s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-041718 --output=json --user=testUser
E0407 13:31:00.586889  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-041718 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-041718 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-041718 --output=json --user=testUser: (7.345516906s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-251547 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-251547 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.120076ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e5e6a2d7-08ff-4ffe-8e99-149c8041c0ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-251547] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a092daf-8175-4444-88ed-b8c454e1657f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"ee4456e6-8ea0-4e76-aea2-305ea425bd65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c22c61e-89d6-45af-a46f-1f67f1cc2753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig"}}
	{"specversion":"1.0","id":"73f8fee6-f00a-42c8-a829-28215f2a4d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube"}}
	{"specversion":"1.0","id":"96254bf8-fc83-4a61-8ca9-b897f697481e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c243f873-6dd2-4a34-afd5-f322478c8d3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c40c323-4f12-44f1-9ab8-e8e9dc44fbbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-251547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-251547
--- PASS: TestErrorJSONOutput (0.20s)
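Aside: each stdout line above is a CloudEvents-style JSON event. As a rough sketch of how such a line can be consumed, the program below decodes the final error event; the struct simply mirrors the keys visible in the log and is not minikube's own event type:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the keys visible in the JSON lines above; it is an
// illustrative shape, not minikube's internal event type.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abridged copy of the final error event from the stdout above
	// (the empty advice/issues/url fields are omitted).
	line := `{"specversion":"1.0","id":"7c40c323-4f12-44f1-9ab8-e8e9dc44fbbd",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"datacontenttype":"application/json",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}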

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (91.53s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-504923 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-504923 --driver=kvm2  --container-runtime=crio: (44.440346692s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-516843 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-516843 --driver=kvm2  --container-runtime=crio: (44.366197423s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-504923
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-516843
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-516843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-516843
helpers_test.go:175: Cleaning up "first-504923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-504923
--- PASS: TestMinikubeProfile (91.53s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-952264 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0407 13:32:47.438457  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-952264 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.41491301s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-952264 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-952264 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.484965204s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.9s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-952264 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-971867
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-971867: (1.282724659s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971867
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971867: (20.725229229s)
--- PASS: TestMountStart/serial/RestartStopped (21.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971867 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054683 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:35:50.512622  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:36:00.587035  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054683 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.463960683s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.88s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-054683 -- rollout status deployment/busybox: (5.019115337s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-28jf2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-2z4xm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-28jf2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-2z4xm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-28jf2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-2z4xm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.56s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-28jf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-28jf2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-2z4xm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-054683 -- exec busybox-58667487b6-2z4xm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
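Aside: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above selects the fifth line of nslookup's output and takes its third space-separated field. A small Go equivalent of that text handling, run against a hypothetical nslookup transcript (the sample text below is made up for illustration):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical nslookup output; the fifth line is the one that
	// awk 'NR==5' selects, and cut -d' ' -f3 takes its third field.
	out := `Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 192.168.39.1
`
	lines := strings.Split(out, "\n")
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	fmt.Println(fields[2])                 // -f3 -> third space-separated field, the address
}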

                                                
                                    
TestMultiNode/serial/AddNode (50.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054683 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-054683 -v 3 --alsologtostderr: (49.427568288s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.02s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-054683 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp testdata/cp-test.txt multinode-054683:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2645375695/001/cp-test_multinode-054683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683:/home/docker/cp-test.txt multinode-054683-m02:/home/docker/cp-test_multinode-054683_multinode-054683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test_multinode-054683_multinode-054683-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683:/home/docker/cp-test.txt multinode-054683-m03:/home/docker/cp-test_multinode-054683_multinode-054683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test_multinode-054683_multinode-054683-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp testdata/cp-test.txt multinode-054683-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2645375695/001/cp-test_multinode-054683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m02:/home/docker/cp-test.txt multinode-054683:/home/docker/cp-test_multinode-054683-m02_multinode-054683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test_multinode-054683-m02_multinode-054683.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m02:/home/docker/cp-test.txt multinode-054683-m03:/home/docker/cp-test_multinode-054683-m02_multinode-054683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test_multinode-054683-m02_multinode-054683-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp testdata/cp-test.txt multinode-054683-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2645375695/001/cp-test_multinode-054683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m03:/home/docker/cp-test.txt multinode-054683:/home/docker/cp-test_multinode-054683-m03_multinode-054683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683 "sudo cat /home/docker/cp-test_multinode-054683-m03_multinode-054683.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 cp multinode-054683-m03:/home/docker/cp-test.txt multinode-054683-m02:/home/docker/cp-test_multinode-054683-m03_multinode-054683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 ssh -n multinode-054683-m02 "sudo cat /home/docker/cp-test_multinode-054683-m03_multinode-054683-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.31s)

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-054683 node stop m03: (1.472212625s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054683 status: exit status 7 (438.224466ms)

                                                
                                                
-- stdout --
	multinode-054683
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054683-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054683-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr: exit status 7 (422.916481ms)

                                                
                                                
-- stdout --
	multinode-054683
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054683-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054683-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:37:13.819313  275699 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:37:13.819537  275699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:37:13.819544  275699 out.go:358] Setting ErrFile to fd 2...
	I0407 13:37:13.819548  275699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:37:13.819715  275699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:37:13.819898  275699 out.go:352] Setting JSON to false
	I0407 13:37:13.819931  275699 mustload.go:65] Loading cluster: multinode-054683
	I0407 13:37:13.820061  275699 notify.go:220] Checking for updates...
	I0407 13:37:13.820327  275699 config.go:182] Loaded profile config "multinode-054683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:37:13.820349  275699 status.go:174] checking status of multinode-054683 ...
	I0407 13:37:13.820793  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:13.820849  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:13.837359  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I0407 13:37:13.837820  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:13.838348  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:13.838369  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:13.838789  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:13.839026  275699 main.go:141] libmachine: (multinode-054683) Calling .GetState
	I0407 13:37:13.840764  275699 status.go:371] multinode-054683 host status = "Running" (err=<nil>)
	I0407 13:37:13.840782  275699 host.go:66] Checking if "multinode-054683" exists ...
	I0407 13:37:13.841086  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:13.841125  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:13.856204  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0407 13:37:13.856696  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:13.857158  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:13.857179  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:13.857483  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:13.857676  275699 main.go:141] libmachine: (multinode-054683) Calling .GetIP
	I0407 13:37:13.860381  275699 main.go:141] libmachine: (multinode-054683) DBG | domain multinode-054683 has defined MAC address 52:54:00:cd:26:81 in network mk-multinode-054683
	I0407 13:37:13.860879  275699 main.go:141] libmachine: (multinode-054683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:26:81", ip: ""} in network mk-multinode-054683: {Iface:virbr1 ExpiryTime:2025-04-07 14:34:24 +0000 UTC Type:0 Mac:52:54:00:cd:26:81 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-054683 Clientid:01:52:54:00:cd:26:81}
	I0407 13:37:13.860913  275699 main.go:141] libmachine: (multinode-054683) DBG | domain multinode-054683 has defined IP address 192.168.39.180 and MAC address 52:54:00:cd:26:81 in network mk-multinode-054683
	I0407 13:37:13.861022  275699 host.go:66] Checking if "multinode-054683" exists ...
	I0407 13:37:13.861311  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:13.861346  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:13.876799  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I0407 13:37:13.877272  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:13.877787  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:13.877824  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:13.878138  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:13.878319  275699 main.go:141] libmachine: (multinode-054683) Calling .DriverName
	I0407 13:37:13.878496  275699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:37:13.878519  275699 main.go:141] libmachine: (multinode-054683) Calling .GetSSHHostname
	I0407 13:37:13.881428  275699 main.go:141] libmachine: (multinode-054683) DBG | domain multinode-054683 has defined MAC address 52:54:00:cd:26:81 in network mk-multinode-054683
	I0407 13:37:13.881829  275699 main.go:141] libmachine: (multinode-054683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:26:81", ip: ""} in network mk-multinode-054683: {Iface:virbr1 ExpiryTime:2025-04-07 14:34:24 +0000 UTC Type:0 Mac:52:54:00:cd:26:81 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-054683 Clientid:01:52:54:00:cd:26:81}
	I0407 13:37:13.881857  275699 main.go:141] libmachine: (multinode-054683) DBG | domain multinode-054683 has defined IP address 192.168.39.180 and MAC address 52:54:00:cd:26:81 in network mk-multinode-054683
	I0407 13:37:13.882025  275699 main.go:141] libmachine: (multinode-054683) Calling .GetSSHPort
	I0407 13:37:13.882215  275699 main.go:141] libmachine: (multinode-054683) Calling .GetSSHKeyPath
	I0407 13:37:13.882371  275699 main.go:141] libmachine: (multinode-054683) Calling .GetSSHUsername
	I0407 13:37:13.882531  275699 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/multinode-054683/id_rsa Username:docker}
	I0407 13:37:13.960567  275699 ssh_runner.go:195] Run: systemctl --version
	I0407 13:37:13.966396  275699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:37:13.981109  275699 kubeconfig.go:125] found "multinode-054683" server: "https://192.168.39.180:8443"
	I0407 13:37:13.981148  275699 api_server.go:166] Checking apiserver status ...
	I0407 13:37:13.981208  275699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:37:13.995715  275699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup
	W0407 13:37:14.005555  275699 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1092/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:37:14.005610  275699 ssh_runner.go:195] Run: ls
	I0407 13:37:14.009810  275699 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0407 13:37:14.014246  275699 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0407 13:37:14.014268  275699 status.go:463] multinode-054683 apiserver status = Running (err=<nil>)
	I0407 13:37:14.014277  275699 status.go:176] multinode-054683 status: &{Name:multinode-054683 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:37:14.014292  275699 status.go:174] checking status of multinode-054683-m02 ...
	I0407 13:37:14.014599  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:14.014633  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:14.031201  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I0407 13:37:14.031655  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:14.032088  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:14.032114  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:14.032469  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:14.032645  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetState
	I0407 13:37:14.034249  275699 status.go:371] multinode-054683-m02 host status = "Running" (err=<nil>)
	I0407 13:37:14.034272  275699 host.go:66] Checking if "multinode-054683-m02" exists ...
	I0407 13:37:14.034604  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:14.034665  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:14.050917  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0407 13:37:14.051484  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:14.052054  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:14.052077  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:14.052419  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:14.052653  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetIP
	I0407 13:37:14.055232  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | domain multinode-054683-m02 has defined MAC address 52:54:00:17:31:c8 in network mk-multinode-054683
	I0407 13:37:14.055686  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:31:c8", ip: ""} in network mk-multinode-054683: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:31 +0000 UTC Type:0 Mac:52:54:00:17:31:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-054683-m02 Clientid:01:52:54:00:17:31:c8}
	I0407 13:37:14.055714  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | domain multinode-054683-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:17:31:c8 in network mk-multinode-054683
	I0407 13:37:14.055884  275699 host.go:66] Checking if "multinode-054683-m02" exists ...
	I0407 13:37:14.056199  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:14.056238  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:14.071524  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0407 13:37:14.072011  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:14.072646  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:14.072670  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:14.073039  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:14.073209  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .DriverName
	I0407 13:37:14.073374  275699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:37:14.073397  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetSSHHostname
	I0407 13:37:14.076011  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | domain multinode-054683-m02 has defined MAC address 52:54:00:17:31:c8 in network mk-multinode-054683
	I0407 13:37:14.076534  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:31:c8", ip: ""} in network mk-multinode-054683: {Iface:virbr1 ExpiryTime:2025-04-07 14:35:31 +0000 UTC Type:0 Mac:52:54:00:17:31:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-054683-m02 Clientid:01:52:54:00:17:31:c8}
	I0407 13:37:14.076573  275699 main.go:141] libmachine: (multinode-054683-m02) DBG | domain multinode-054683-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:17:31:c8 in network mk-multinode-054683
	I0407 13:37:14.076710  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetSSHPort
	I0407 13:37:14.076878  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetSSHKeyPath
	I0407 13:37:14.077012  275699 main.go:141] libmachine: (multinode-054683-m02) Calling .GetSSHUsername
	I0407 13:37:14.077202  275699 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20598-242355/.minikube/machines/multinode-054683-m02/id_rsa Username:docker}
	I0407 13:37:14.160012  275699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:37:14.175597  275699 status.go:176] multinode-054683-m02 status: &{Name:multinode-054683-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:37:14.175644  275699 status.go:174] checking status of multinode-054683-m03 ...
	I0407 13:37:14.176227  275699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:37:14.176287  275699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:37:14.192043  275699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0407 13:37:14.192527  275699 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:37:14.192993  275699 main.go:141] libmachine: Using API Version  1
	I0407 13:37:14.193011  275699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:37:14.193331  275699 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:37:14.193495  275699 main.go:141] libmachine: (multinode-054683-m03) Calling .GetState
	I0407 13:37:14.195065  275699 status.go:371] multinode-054683-m03 host status = "Stopped" (err=<nil>)
	I0407 13:37:14.195081  275699 status.go:384] host is not running, skipping remaining checks
	I0407 13:37:14.195087  275699 status.go:176] multinode-054683-m03 status: &{Name:multinode-054683-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 node start m03 -v=7 --alsologtostderr
E0407 13:37:47.434632  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-054683 node start m03 -v=7 --alsologtostderr: (39.059440568s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.68s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (343.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054683
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-054683
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-054683: (3m3.144820263s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054683 --wait=true -v=8 --alsologtostderr
E0407 13:41:00.586808  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:42:47.433832  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054683 --wait=true -v=8 --alsologtostderr: (2m40.235836279s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054683
--- PASS: TestMultiNode/serial/RestartKeepsNodes (343.48s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-054683 node delete m03: (2.072146099s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.60s)
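The readiness probe at multinode_test.go:444 above is a single go-template; read inside-out it does the following. The command is copied from the log, and the comment lines are annotation only, not captured output:

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
	# range .items             -> iterate every node object returned by the API
	# range .status.conditions -> walk each node's condition list
	# if eq .type "Ready"      -> keep only the Ready condition
	# {{.status}}{{"\n"}}      -> print its status, one node per line ("True" when the node is healthy)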

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 stop
E0407 13:44:03.655047  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:00.589133  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-054683 stop: (3m1.502001707s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054683 status: exit status 7 (85.76684ms)

                                                
                                                
-- stdout --
	multinode-054683
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054683-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr: exit status 7 (85.497848ms)

                                                
                                                
-- stdout --
	multinode-054683
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054683-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:46:41.588599  278728 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:46:41.588891  278728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:46:41.588901  278728 out.go:358] Setting ErrFile to fd 2...
	I0407 13:46:41.588905  278728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:46:41.589142  278728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:46:41.589354  278728 out.go:352] Setting JSON to false
	I0407 13:46:41.589392  278728 mustload.go:65] Loading cluster: multinode-054683
	I0407 13:46:41.589497  278728 notify.go:220] Checking for updates...
	I0407 13:46:41.589891  278728 config.go:182] Loaded profile config "multinode-054683": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:46:41.589918  278728 status.go:174] checking status of multinode-054683 ...
	I0407 13:46:41.590362  278728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:46:41.590415  278728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:46:41.605980  278728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0407 13:46:41.606430  278728 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:46:41.606966  278728 main.go:141] libmachine: Using API Version  1
	I0407 13:46:41.606989  278728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:46:41.607420  278728 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:46:41.607612  278728 main.go:141] libmachine: (multinode-054683) Calling .GetState
	I0407 13:46:41.609214  278728 status.go:371] multinode-054683 host status = "Stopped" (err=<nil>)
	I0407 13:46:41.609231  278728 status.go:384] host is not running, skipping remaining checks
	I0407 13:46:41.609239  278728 status.go:176] multinode-054683 status: &{Name:multinode-054683 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:46:41.609264  278728 status.go:174] checking status of multinode-054683-m02 ...
	I0407 13:46:41.609707  278728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0407 13:46:41.609764  278728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 13:46:41.624681  278728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I0407 13:46:41.625033  278728 main.go:141] libmachine: () Calling .GetVersion
	I0407 13:46:41.625459  278728 main.go:141] libmachine: Using API Version  1
	I0407 13:46:41.625480  278728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 13:46:41.625806  278728 main.go:141] libmachine: () Calling .GetMachineName
	I0407 13:46:41.625998  278728 main.go:141] libmachine: (multinode-054683-m02) Calling .GetState
	I0407 13:46:41.627402  278728 status.go:371] multinode-054683-m02 host status = "Stopped" (err=<nil>)
	I0407 13:46:41.627420  278728 status.go:384] host is not running, skipping remaining checks
	I0407 13:46:41.627427  278728 status.go:176] multinode-054683-m02 status: &{Name:multinode-054683-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.67s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (154.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054683 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0407 13:47:47.437720  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054683 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m34.137309088s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-054683 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (154.68s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-054683
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054683-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-054683-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.563285ms)

                                                
                                                
-- stdout --
	* [multinode-054683-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-054683-m02' is duplicated with machine name 'multinode-054683-m02' in profile 'multinode-054683'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-054683-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-054683-m03 --driver=kvm2  --container-runtime=crio: (43.164480555s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-054683
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-054683: exit status 80 (222.05169ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-054683 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-054683-m03 already exists in multinode-054683-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-054683-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-054683-m03: (1.008995037s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.50s)
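In outline, the conflict this test provokes, using the commands and exit codes recorded above (the comment lines are annotation, not captured output):

	out/minikube-linux-amd64 start -p multinode-054683-m02 --driver=kvm2  --container-runtime=crio
	# exit 14: the name collides with node m02 of the existing multinode-054683 profile
	out/minikube-linux-amd64 start -p multinode-054683-m03 --driver=kvm2  --container-runtime=crio
	# accepted: m03 was removed from the profile in DeleteNode, so the name is free as a standalone profile
	out/minikube-linux-amd64 node add -p multinode-054683
	# exit 80: the next node name would be m03, which now exists as its own profile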

                                                
                                    
TestScheduledStopUnix (115.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-276845 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-276845 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.047974038s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-276845 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-276845 -n scheduled-stop-276845
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-276845 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:53:35.163737  249516 retry.go:31] will retry after 145.073µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.164905  249516 retry.go:31] will retry after 161.271µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.166063  249516 retry.go:31] will retry after 206.453µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.167230  249516 retry.go:31] will retry after 253.1µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.168372  249516 retry.go:31] will retry after 698.451µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.169493  249516 retry.go:31] will retry after 961.581µs: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.170651  249516 retry.go:31] will retry after 1.116027ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.172866  249516 retry.go:31] will retry after 1.82929ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.175077  249516 retry.go:31] will retry after 3.521689ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.179296  249516 retry.go:31] will retry after 2.610732ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.182487  249516 retry.go:31] will retry after 8.23129ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.191706  249516 retry.go:31] will retry after 7.711653ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.199958  249516 retry.go:31] will retry after 17.462465ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.218236  249516 retry.go:31] will retry after 28.350004ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
I0407 13:53:35.247493  249516 retry.go:31] will retry after 30.91835ms: open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/scheduled-stop-276845/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-276845 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-276845 -n scheduled-stop-276845
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-276845
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-276845 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-276845
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-276845: exit status 7 (66.72ms)

                                                
                                                
-- stdout --
	scheduled-stop-276845
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-276845 -n scheduled-stop-276845
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-276845 -n scheduled-stop-276845: exit status 7 (68.76362ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-276845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-276845
--- PASS: TestScheduledStopUnix (115.76s)
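For reference, the scheduled-stop flow the test drives, limited to the flags exercised in this run (profile name taken from the log; the timings are the test's choices, not requirements):

	out/minikube-linux-amd64 stop -p scheduled-stop-276845 --schedule 5m          # arm a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-276845 --cancel-scheduled     # disarm it before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-276845 --schedule 15s         # re-arm with a short deadline and let it fire
	out/minikube-linux-amd64 status -p scheduled-stop-276845                      # returns exit status 7 once the host reports Stopped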

                                                
                                    
TestRunningBinaryUpgrade (228.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1974246837 start -p running-upgrade-017658 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1974246837 start -p running-upgrade-017658 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.167228094s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-017658 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-017658 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m46.718735935s)
helpers_test.go:175: Cleaning up "running-upgrade-017658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-017658
--- PASS: TestRunningBinaryUpgrade (228.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.490524ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-812476] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
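The rejected invocation and the remedy the error text points to, side by side; the final form simply drops the conflicting flag and is an assumption based on that message, not something executed in this run:

	out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
	# exit 14 (MK_USAGE): --kubernetes-version cannot be combined with --no-kubernetes
	minikube config unset kubernetes-version
	# clears a globally configured default version, as the error message advises
	out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --driver=kvm2  --container-runtime=crio
	# assumed-valid form: no version pinning when Kubernetes is disabled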

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (101.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-812476 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-812476 --driver=kvm2  --container-runtime=crio: (1m40.864779563s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-812476 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.12s)

                                                
                                    
TestNetworkPlugins/group/false (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-471753 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-471753 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (113.009681ms)

                                                
                                                
-- stdout --
	* [false-471753] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:54:49.524541  283225 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:54:49.524835  283225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:54:49.524846  283225 out.go:358] Setting ErrFile to fd 2...
	I0407 13:54:49.524851  283225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:54:49.525024  283225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-242355/.minikube/bin
	I0407 13:54:49.525640  283225 out.go:352] Setting JSON to false
	I0407 13:54:49.526627  283225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":20236,"bootTime":1744013853,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:54:49.526735  283225 start.go:139] virtualization: kvm guest
	I0407 13:54:49.528569  283225 out.go:177] * [false-471753] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:54:49.530144  283225 notify.go:220] Checking for updates...
	I0407 13:54:49.530155  283225 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:54:49.531700  283225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:54:49.533288  283225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-242355/kubeconfig
	I0407 13:54:49.534559  283225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-242355/.minikube
	I0407 13:54:49.535772  283225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:54:49.536999  283225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:54:49.538910  283225 config.go:182] Loaded profile config "NoKubernetes-812476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:54:49.539064  283225 config.go:182] Loaded profile config "force-systemd-env-840043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:54:49.539188  283225 config.go:182] Loaded profile config "offline-crio-793502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0407 13:54:49.539307  283225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:54:49.576359  283225 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:54:49.577533  283225 start.go:297] selected driver: kvm2
	I0407 13:54:49.577553  283225 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:54:49.577565  283225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:54:49.579713  283225 out.go:201] 
	W0407 13:54:49.580983  283225 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0407 13:54:49.582074  283225 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-471753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-471753

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-471753"

                                                
                                                
----------------------- debugLogs end: false-471753 [took: 2.865111043s] --------------------------------
helpers_test.go:175: Cleaning up "false-471753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-471753
--- PASS: TestNetworkPlugins/group/false (3.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (151.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1463253997 start -p stopped-upgrade-360931 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1463253997 start -p stopped-upgrade-360931 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.791278803s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1463253997 -p stopped-upgrade-360931 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1463253997 -p stopped-upgrade-360931 stop: (2.137664741s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-360931 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-360931 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.254724263s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.18s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (65.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.832983033s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-812476 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-812476 status -o json: exit status 2 (256.754644ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-812476","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-812476
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-812476: (1.093717373s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.18s)

                                                
                                    
TestNoKubernetes/serial/Start (28.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0407 13:57:47.434231  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-812476 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.931950586s)
--- PASS: TestNoKubernetes/serial/Start (28.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-812476 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-812476 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.63491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (30.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.081509978s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.275121227s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-812476
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-812476: (1.360293067s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-812476 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-812476 --driver=kvm2  --container-runtime=crio: (23.416943943s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.42s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-360931
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-812476 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-812476 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.280497ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (89.84s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-440331 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-440331 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m29.842310388s)
--- PASS: TestPause/serial/Start (89.84s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0407 14:00:43.657195  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:01:00.586724  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m30.238118091s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-471753 "pgrep -a kubelet"
I0407 14:01:59.918719  249516 config.go:182] Loaded profile config "auto-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dwnx4" [91aff09b-97c8-4374-bba3-0eea5d40a766] Pending
helpers_test.go:344: "netcat-5d86dc444-dwnx4" [91aff09b-97c8-4374-bba3-0eea5d40a766] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.0052285s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (72.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m12.400185823s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (59.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (59.940567704s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.94s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m22.040492381s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pn6nz" [2007ba8f-9815-4a4b-b73a-211dccfa3129] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008336154s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-471753 "pgrep -a kubelet"
I0407 14:03:22.406259  249516 config.go:182] Loaded profile config "enable-default-cni-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hfx4p" [2f4f6d58-7d4d-4dac-8610-63d61327fe24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hfx4p" [2f4f6d58-7d4d-4dac-8610-63d61327fe24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003872599s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-471753 "pgrep -a kubelet"
I0407 14:03:24.584512  249516 config.go:182] Loaded profile config "flannel-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-471753 replace --force -f testdata/netcat-deployment.yaml
I0407 14:03:25.255714  249516 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gc6sp" [03d2d4c6-ccad-4652-b848-63082b0f2a8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gc6sp" [03d2d4c6-ccad-4652-b848-63082b0f2a8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005618276s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.70s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (16.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-471753 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-471753 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.197513082s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 14:03:49.848498  249516 retry.go:31] will retry after 970.928768ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (16.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-471753 "pgrep -a kubelet"
I0407 14:03:51.687147  249516 config.go:182] Loaded profile config "bridge-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hndk7" [95d3cda9-0650-4e82-bb66-7f8285445c9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hndk7" [95d3cda9-0650-4e82-bb66-7f8285445c9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005838347s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m22.085715178s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.256195735s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (113.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-471753 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m53.087057628s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (113.09s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-smx2h" [7c9789d2-9cd0-40f0-9112-e438fbdbb05c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003708606s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-471753 "pgrep -a kubelet"
I0407 14:05:22.705090  249516 config.go:182] Loaded profile config "calico-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lwfmw" [36dd0e30-5744-4e85-b7ab-bd548a044acb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lwfmw" [36dd0e30-5744-4e85-b7ab-bd548a044acb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004180096s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ljvx7" [d3c107cb-c615-42ad-b0e2-00b2efa04b42] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004372347s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-471753 "pgrep -a kubelet"
I0407 14:05:31.721609  249516 config.go:182] Loaded profile config "kindnet-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7r7j9" [d6a3483a-7462-4ca9-b5cf-e5ad1395c578] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7r7j9" [d6a3483a-7462-4ca9-b5cf-e5ad1395c578] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005417094s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-421325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-421325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m12.110284176s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-471753 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-574417 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:06:00.586373  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
I0407 14:06:00.765101  249516 config.go:182] Loaded profile config "custom-flannel-471753": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-574417 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m23.85665228s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.86s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-471753 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fcstw" [eb0fae18-8573-46cc-bc7a-b2115310db65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fcstw" [eb0fae18-8573-46cc-bc7a-b2115310db65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004290847s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-471753 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-471753 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-718753 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:07:00.209864  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.216368  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.227829  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.249316  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.291011  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.372719  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.534357  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:00.856516  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:01.498623  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:02.780009  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:05.341408  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-718753 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m54.862139931s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-421325 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d912a2ad-5523-42c5-8c54-329e797ca56e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0407 14:07:10.463512  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d912a2ad-5523-42c5-8c54-329e797ca56e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004693288s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-421325 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-421325 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-421325 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-421325 --alsologtostderr -v=3
E0407 14:07:20.705471  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-421325 --alsologtostderr -v=3: (1m31.004828311s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-574417 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8de99da0-b585-49be-924d-8b3081ccd3d9] Pending
helpers_test.go:344: "busybox" [8de99da0-b585-49be-924d-8b3081ccd3d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8de99da0-b585-49be-924d-8b3081ccd3d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004244489s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-574417 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-574417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-574417 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-574417 --alsologtostderr -v=3
E0407 14:07:41.187222  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:07:47.433977  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.329819  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.336212  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.347631  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.369066  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.410614  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.492103  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.654096  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:18.975517  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:19.617463  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:20.899756  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-574417 --alsologtostderr -v=3: (1m31.016075744s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-718753 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d9625b02-4dee-4552-b46e-d40d8b66c0bc] Pending
E0407 14:08:22.148595  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.631714  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.638136  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.649527  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.671044  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.713277  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.794798  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:22.956399  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d9625b02-4dee-4552-b46e-d40d8b66c0bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0407 14:08:23.278470  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:23.461109  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:23.920823  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:25.203186  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d9625b02-4dee-4552-b46e-d40d8b66c0bc] Running
E0407 14:08:27.764522  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:28.583447  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003806924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-718753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-718753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0407 14:08:32.885959  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-718753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-718753 --alsologtostderr -v=3
E0407 14:08:38.825719  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:43.128088  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-718753 --alsologtostderr -v=3: (1m31.443548813s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-421325 -n no-preload-421325
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-421325 -n no-preload-421325: exit status 7 (65.748977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-421325 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (349.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-421325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:08:51.955845  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:51.962269  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:51.973651  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:51.995224  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:52.036760  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:52.118545  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:52.280198  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:52.602063  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:53.243344  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:54.525449  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:57.087653  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:08:59.307582  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:02.209350  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:03.610178  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-421325 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m49.110602747s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-421325 -n no-preload-421325
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-574417 -n embed-certs-574417
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-574417 -n embed-certs-574417: exit status 7 (73.443115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-574417 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (300.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-574417 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:09:10.516764  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/addons-735249/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:12.450841  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:32.932867  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:40.269167  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:44.070511  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/auto-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:09:44.571862  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-574417 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m0.288284897s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-574417 -n embed-certs-574417
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (300.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753: exit status 7 (77.078371ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-718753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-718753 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:10:13.895300  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.470875  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.477455  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.488862  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.510334  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.551857  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.633407  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:16.795097  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:17.117428  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:17.759777  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:19.041668  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:21.603939  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.490969  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.497403  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.508757  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.530194  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.571663  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.653931  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:25.815950  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:26.138759  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:26.725694  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:26.781151  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:28.062795  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:30.625111  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:35.746524  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:36.967183  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:45.988874  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:10:57.448586  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:00.586160  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.003087  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.009531  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.020933  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.042360  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.083904  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.165405  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.327205  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:01.648937  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:02.190796  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:02.291289  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:03.572711  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:06.134260  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:06.471104  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:06.493488  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/enable-default-cni-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:11.255771  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:21.497913  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:35.817683  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:38.410451  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:41.979694  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:11:47.432639  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-718753 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m1.618033596s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-405646 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-405646 --alsologtostderr -v=3: (3.297171041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405646 -n old-k8s-version-405646: exit status 7 (69.608837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-405646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8j5hk" [7e5ca698-da81-4478-8a2f-d80ce509532d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004422375s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8j5hk" [7e5ca698-da81-4478-8a2f-d80ce509532d] Running
E0407 14:14:19.660059  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/bridge-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004439395s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-574417 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-574417 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-574417 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-574417 -n embed-certs-574417
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-574417 -n embed-certs-574417: exit status 2 (242.304984ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-574417 -n embed-certs-574417
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-574417 -n embed-certs-574417: exit status 2 (247.058775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-574417 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-574417 -n embed-certs-574417
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-574417 -n embed-certs-574417
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-541721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-541721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (47.594628551s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qkjjk" [62780035-bd57-4310-9014-d67bc02548fe] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qkjjk" [62780035-bd57-4310-9014-d67bc02548fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003096701s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qkjjk" [62780035-bd57-4310-9014-d67bc02548fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00513498s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-421325 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-421325 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-421325 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-421325 --alsologtostderr -v=1: (1.041515926s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-421325 -n no-preload-421325
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-421325 -n no-preload-421325: exit status 2 (272.118227ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-421325 -n no-preload-421325
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-421325 -n no-preload-421325: exit status 2 (296.72128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-421325 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-421325 -n no-preload-421325
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-421325 -n no-preload-421325
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qpdc6" [c01297b0-6c91-41f3-9eba-5138404ec7f1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004791068s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qpdc6" [c01297b0-6c91-41f3-9eba-5138404ec7f1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003383846s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-718753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-541721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-541721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.251492561s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-541721 --alsologtostderr -v=3
E0407 14:15:16.470462  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-541721 --alsologtostderr -v=3: (11.293918295s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-718753 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-718753 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753: exit status 2 (243.522953ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753: exit status 2 (233.920739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-718753 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-718753 -n default-k8s-diff-port-718753
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541721 -n newest-cni-541721
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541721 -n newest-cni-541721: exit status 7 (65.555558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-541721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0407 14:15:25.491454  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-541721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0407 14:15:44.174415  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/calico-471753/client.crt: no such file or directory" logger="UnhandledError"
E0407 14:15:53.196728  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/kindnet-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-541721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (34.732106607s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541721 -n newest-cni-541721
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-541721 image list --format=json
E0407 14:16:00.586395  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/functional-709179/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-541721 --alsologtostderr -v=1
E0407 14:16:01.002149  249516 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-242355/.minikube/profiles/custom-flannel-471753/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541721 -n newest-cni-541721
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541721 -n newest-cni-541721: exit status 2 (246.063696ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541721 -n newest-cni-541721
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541721 -n newest-cni-541721: exit status 2 (238.50783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-541721 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541721 -n newest-cni-541721
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541721 -n newest-cni-541721
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)

                                                
                                    

Test skip (40/321)

Order Skipped test Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 2.95
266 TestNetworkPlugins/group/cilium 3.32
281 TestStartStop/group/disable-driver-mounts 0.28
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-735249 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-471753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-471753

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-471753"

                                                
                                                
----------------------- debugLogs end: kubenet-471753 [took: 2.80433401s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-471753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-471753
--- SKIP: TestNetworkPlugins/group/kubenet (2.95s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-471753 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-471753" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-471753

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-471753" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-471753"

                                                
                                                
----------------------- debugLogs end: cilium-471753 [took: 3.174399859s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-471753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-471753
--- SKIP: TestNetworkPlugins/group/cilium (3.32s)
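Note: every "context does not exist" / "Profile not found" entry above comes from the debug-log collector querying a cilium profile that was never started before the test was skipped. A minimal sketch of how the same errors could be reproduced locally, assuming the profile name cilium-471753 and the minikube/kubectl binaries already used elsewhere in this report:

	# list known profiles; cilium-471753 should be absent
	out/minikube-linux-amd64 profile list
	# any kubectl call against the missing context fails the same way as the log entries above
	kubectl --context cilium-471753 get pods
	# starting the profile (as the log itself suggests) would create the missing context
	out/minikube-linux-amd64 start -p cilium-471753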

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-949853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-949853
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                    